In AI arms race, ethics may be the first casualty

As the tech world embraces ChatGPT and other generative AI programs, the industry’s longstanding pledges to deploy AI responsibly could quickly be swamped by beat-the-competition pressures.

Why it matters: Once again, tech's leaders are playing a game of "build fast and ask questions later" with a new technology that's likely to spark profound changes in society.

  • Social media started two decades ago with a similar rush to market. First came the excitement — later, the damage and regrets.

Catch up quick: As machine learning and related AI techniques hatched in labs over the last decade, scholars and critics sounded alarms about the harms the technology could cause, including misinformation, bias, hate speech and harassment, loss of privacy and fraud.

  • In response, companies made reassuring statements about their commitment to ethics reviews and bias screening.
  • High-profile missteps — like Microsoft Research's 2016 "Tay" Twitter bot, which users easily prompted into repeating offensive and racist statements — made tech giants reluctant to push their most advanced AI pilots out into the world.

Yes, but: Smaller companies and startups have much less at risk, financially and reputationally.

  • That explains why it was OpenAI — a relatively small maverick entrant in the field — rather than Google or Meta that kicked off the current generative-AI frenzy with the release of ChatGPT late last year.
  • Both companies have announced multiple generative-AI research projects, and many observers believe they've developed tools internally that meet or exceed ChatGPT's abilities — but have held them back for fear of causing offense or incurring liability.

ChatGPT "is nothing revolutionary," and other companies have matched it, Meta chief AI scientist Yann LeCun said recently.

  • In September, Meta announced its Make-a-Video tool, which generates videos from text prompts. And in November, the company released a demo of a generative AI for scientific research called Galactica.
  • But Meta took Galactica down after three days of scorching criticism from scholars, who found that it generated unreliable information.

What’s next: Whatever restraint giants like Google and Meta have shown to date could now erode as they seek to demonstrate that they haven’t fallen behind.
