Ageism, sexism, classism and more: 7 examples of bias in AI-generated images

If you’ve been online much recently, chances are you’ve seen some of the fantastical imagery created by text-to-image generators such as Midjourney and DALL-E 2. This includes everything from the naturalistic (think a soccer player’s headshot) to the surreal (think a dog in space).

Creating images with AI generators has never been simpler. At the same time, however, their outputs can reproduce biases and deepen inequalities, as our latest research shows.

How do AI image generators work?

AI-based image generators use machine-learning models that take a text input and produce one or more images matching the description. Training these models requires massive datasets with millions of images.

Although Midjourney is opaque about exactly how its algorithms work, most AI image generators use a process called diffusion. Diffusion models are trained by adding random “noise” to training images and then learning to recover the images by removing that noise. To generate a new image, the model starts from pure noise and repeatedly removes it, guided by the text prompt, until an image emerges.
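The forward half of this process — gradually drowning an image in noise — can be sketched in a few lines. This is a toy illustration only, not Midjourney's actual pipeline: the "image" is a short 1-D array, and the noise schedule values are assumptions chosen for demonstration. In a real diffusion model, a trained neural network predicts the added noise so the process can be run in reverse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a 1-D array standing in for pixel values.
x0 = np.linspace(-1.0, 1.0, 8)

# Assumed linear noise schedule over T diffusion steps.
T = 50
betas = np.linspace(1e-4, 0.2, T)        # per-step noise variance
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)          # cumulative signal-retention factor

def add_noise(x0, t):
    """Forward diffusion: jump straight to step t in closed form.

    Returns the noised sample and the noise that was added --
    the quantity a diffusion model is trained to predict.
    """
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

# Early on, the image is still mostly visible; by the final step,
# almost all of the original signal has been replaced by noise.
x_early, _ = add_noise(x0, 5)
x_late, _ = add_noise(x0, T - 1)
print("signal remaining at final step:", alpha_bars[-1])
```

Running the reverse of this process — starting from `x_late`-style pure noise and stepping back toward a clean image — is where the trained network and the text prompt come in.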
