New Study Warns of Gender and Racial Biases in Robots

A new study offers concerning insight into how robots can exhibit racial and gender biases when trained with flawed AI. In the study, a robot operating with a popular internet-based AI system consistently gravitated toward the racial and gender biases present in society.

The study was led by researchers at Johns Hopkins University, the Georgia Institute of Technology, and the University of Washington. It is believed to be the first to show that robots loaded with this widely used model operate with significant gender and racial biases.

The new work was presented at the 2022 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT).

Flawed Neural Network Models

Andrew Hundt is an author of the research and a postdoctoral fellow at Georgia Tech. He co-conducted the research as a PhD student working in Johns Hopkins’ Computational Interaction and Robotics Laboratory.

“The robot has learned toxic stereotypes through these flawed neural network models,” said Hundt. “We’re at risk of creating a generation of racist and sexist robots but people and organizations have decided it’s OK to create these products without addressing the issues.”

When AI models are built to recognize humans and objects, they are often trained on large datasets freely available on the internet. However, the internet is full of inaccurate and biased content, meaning algorithms trained on these datasets can absorb the same biases.
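This failure mode can be illustrated with a minimal sketch (not the study's actual model): a toy frequency-based "model" trained on a hypothetical, skewed web-scraped corpus simply reproduces the skew in its training data.

```python
from collections import Counter

# Hypothetical toy corpus of (occupation, gender) pairs, standing in for
# web-scraped captions where one gender co-occurs with an occupation far
# more often than the other.
corpus = [
    ("doctor", "man"), ("doctor", "man"), ("doctor", "man"),
    ("doctor", "woman"),
    ("nurse", "woman"), ("nurse", "woman"), ("nurse", "woman"),
    ("nurse", "man"),
]

# A naive "model": predict the gender most frequently associated with a
# word in training. Real neural networks are far more complex, but the
# statistical mechanism of absorbing dataset skew is analogous.
counts = {}
for word, gender in corpus:
    counts.setdefault(word, Counter())[gender] += 1

def predict(word):
    return counts[word].most_common(1)[0][0]

print(predict("doctor"))  # reflects the skew in the data, not reality
print(predict("nurse"))
```

The point of the sketch is that nothing in the "model" is explicitly biased; the bias lives entirely in the training data, and the model faithfully reproduces it.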
