Stop Blaming Humans for Bias in AI

The topic of bias gains currency each day. As AI becomes more pervasive, we’re seeing more examples of how AI delivers value but can also spread harm through bias inherent in the data sets that businesses use to train AI applications. These fears are well-founded, and examples are easy to find:

  • A University of California, Berkeley study revealed that lenders charge Black and Hispanic borrowers higher rates. According to the study, algorithmic strategic pricing uses machine learning to identify shoppers who are likely to do less comparison shopping and accept higher-priced offerings. The result is an algorithm that is biased against Black and Hispanic borrowers.
  • Including the word “transgender” in video titles has resulted in YouTubers receiving lower ad revenue on those videos. Meg Green, a user experience researcher for Rocket Homes, commented, “Being gay or being Black or being a trans woman does not mean these things are negative and that you don’t want to read this information. Anything about being bisexual and gay is pornographic and not acceptable for children, according to some biased data found with AI.”
  • A recent study showed that broadly targeted ads on Facebook for supermarket cashier positions were shown to an audience of 85 percent women, while jobs with taxi companies went to an audience that was approximately 75 percent Black. Miranda Bogen, a senior policy analyst at Upturn, noted, “This is a quintessential case of an algorithm reproducing bias from the real world, without human intervention.”
