As someone working in a creative field, I’ve never been concerned about a computer taking my job. I always felt confident that the tasks required of me as a photo editor for New York Magazine are too complex and messy — too human — for an artificial intelligence to perform. That is, until DALL-E 2, a sophisticated AI that generates original artwork based only on text input, opened to public beta last June.
It’s easy to lose hours on the r/dalle2 subreddit, where beta testers have been posting their work. More often than not, the only way to differentiate a DALL-E creation from a human-generated image is the row of five colorful squares tucked into the bottom-right corner of each composition — DALL-E’s signature. As I scrolled through images of Super Mario getting his citizenship at Ellis Island and the Mona Lisa painting a portrait of da Vinci, I couldn’t shake the question that town criers and elevator operators of yore must have confronted: Was my obsolescence on the horizon?
DALL-E, named after the surrealist artist Salvador Dalí and Pixar’s lovable garbage robot WALL-E, was released by the San Francisco-based research lab OpenAI in January 2021. The first iteration felt like a curious novelty, but nothing more: the compositions were remarkable only because they were generated by AI. In contrast, DALL-E 2, which launched in April 2022, is light-years ahead in image complexity and natural semantics; it’s easily one of the most advanced image generators in development, and it’s evolving at an astonishing speed. Last week, OpenAI launched a new Outpainting feature, which allows users to extend their canvas beyond its original borders, revealing, for example, the cluttered kitchen surrounding Johannes Vermeer’s Girl With a Pearl Earring.