ARTIFICIAL INTELLIGENCE RESEARCHERS are facing a problem of accountability: How do you ensure decisions are responsible when the decision maker is not a responsible person, but rather an algorithm? Right now, only a handful of people and organizations have the power and the resources to automate decision-making.
Organizations rely on AI to approve a loan or shape a defendant’s sentence. But the foundations upon which these intelligent systems are built are susceptible to bias. Bias from the data, from the programmer, and from a powerful company’s bottom line can snowball into unintended consequences. This is the reality AI researcher Timnit Gebru cautioned against at a RE:WIRED talk on Tuesday.
“There were companies purporting [to assess] someone’s likelihood of [committing] a crime again,” Gebru said. “That was terrifying for me.”
Gebru was a star engineer at Google who specialized in AI ethics. She co-led a team tasked with standing guard against algorithmic racism, sexism, and other forms of bias. Gebru also cofounded the nonprofit Black in AI, which seeks to improve the inclusion, visibility, and health of Black people in her field.
Last year, Google forced her out. But she hasn’t given up her fight to prevent unintended damage from machine learning algorithms.