How Google’s 2021 AI ethics debate foreshadowed the future

Two years ago, AI researchers published a hot-button research paper on the tech behind Bard, ChatGPT, and more.

When Google debuted its new AI chatbot, Bard, something unexpected happened: after the tool made a factual error in a promotional video, Google's market value dropped by $100 billion in a single day.

Criticism of the tool's reportedly rushed debut harks back to an AI ethics controversy at Google two years ago, when the company's own researchers warned that language-model development was moving too fast, without robust, responsible AI frameworks in place.

In 2021, the technology became central to an internal-debate-turned-national-headline after members of Google's AI ethics team, including Timnit Gebru and Margaret Mitchell, wrote a paper on the dangers of large language models (LLMs). The research paper—called "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜"—set off a complex chain of events that led to both women being fired and, eventually, the restructuring of Google's responsible AI department. Two years later, the concerns the researchers raised are more relevant than ever.

“The Stochastic Parrots paper was pretty prescient, insofar as it definitely pointed out a lot of issues that we’re still working through now,” Alex Hanna, a former member of Google’s AI ethics team who is now director of research at the Distributed AI Research Institute founded by Gebru, told us.

Since the paper's publication, buzz and debate about LLMs—one of the biggest AI advances in recent years—have gripped the tech industry and the business world at large. The generative AI sector raised $1.4 billion last year alone, according to PitchBook data, and that doesn't include the two megadeals that opened this year: Microsoft's investment in OpenAI and Google's in Anthropic.