It’s August 2022, and by now you’ve no doubt read (or, more likely, seen) something about AI art. Whether it’s random jokes made for Twitter or paintings that look like they were made by actual human beings, artificial intelligence’s ability to create art has exploded onto the scene over the last few months. And while this has been great news for shitposts and fans of tech, it has also raised a number of important questions and concerns.
If you haven’t read or seen anything about the subject, AI art—at least as it exists today—is, as Ahmed Elgammal so neatly puts it in American Scientist, made when “artists write algorithms not to follow a set of rules, but to ‘learn’ a specific aesthetic by analyzing thousands of images. The algorithm then tries to generate new images in adherence to the aesthetics it has learned.”
From a user’s perspective, this is most often done by entering a text prompt: type something like “wizard standing on a hillside under a rainbow”, and an AI will attempt to give you a fairly decent approximation of that as an image. You could also type “Spongebob grieving for Batman’s parents” and get something just as close to what you’re imagining.
Basically, we now live in a world where machines have been fed millions upon millions of pieces of human endeavour, and are now using the cumulative data they’ve amassed to create their own works. This has been fun for casual users and interesting for tech enthusiasts, sure, but it has also created an ethical and copyright black hole, where everyone from artists to lawyers to engineers has very strong opinions on what this all means, for their jobs and for the nature of art itself.