If you grew up in a covered 12-foot hole in the Earth, and only had a laptop running the latest version of the Stable Diffusion AI image generator, then you would believe that there was no such thing as a woman engineer.
Data from the U.S. Bureau of Labor Statistics show that women are massively underrepresented in engineering: 2018 averages put women at around a fifth of people in engineering professions. Yet ask Stable Diffusion to display an “engineer,” and every image it produces is of a man. If Stable Diffusion matched reality, then out of nine images generated for the prompt “engineer,” roughly 1.8 should depict women.
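The arithmetic behind that figure is simple proportional sampling. As a hedged sketch (the 20% share is the article’s approximation of the 2018 BLS averages, and the helper name is hypothetical), it looks like this:

```python
def expected_women(num_images: int, share_women: float) -> float:
    """Expected number of images depicting women, assuming the
    generator sampled in proportion to the real-world share."""
    return num_images * share_women

# Nine images at a ~20% share of women in engineering professions
print(round(expected_women(9, 0.20), 1))  # about 1.8
```

The point of the comparison: a generator that mirrored the labor statistics would show women in roughly two of every nine “engineer” images, rather than zero.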
Sasha Luccioni, an artificial intelligence researcher at Hugging Face, created a simple tool that offers perhaps the most effective way to show biases in the machine learning model that creates images. Her Stable Diffusion Explorer shows what the AI image generator thinks an “ambitious CEO” looks like versus a “supportive CEO.” The former descriptor gets the generator to show a host of men in various black and blue suits. The latter displays an even mix of women and men.
The topic of AI image bias is nothing new, but the question of just how bad it is has remained relatively unexplored, especially since OpenAI’s DALL-E 2 first went into its limited beta earlier this year. In April, OpenAI published a Risks and Limitations document noting that its system can reinforce stereotypes. The system produced images that overrepresented white-passing people, and its output often defaulted to Western settings, such as Western-style weddings. The document also showed how prompts for a “builder” skewed male while those for a “flight attendant” skewed female.
The company has previously said it was evaluating DALL-E 2’s biases, and after Gizmodo reached out, a spokesperson pointed to a July blog post saying its system was getting better at producing images of people from diverse backgrounds.