Why is Meta’s new AI chatbot so bad?

Earlier this month, Meta (the corporation formerly known as Facebook) released BlenderBot, an innocuously named AI chatbot that anyone in the US can talk with. Immediately, users across the country started posting the AI’s takes condemning Facebook, while pointing out that, as has often been the case with language models like this one, it’s easy to get the AI to spread racist stereotypes and conspiracy theories.

When I played with BlenderBot, I definitely saw my share of bizarre AI-generated conspiracy theories, like one about how big government is suppressing the true Bible, plus plenty of horrifying moral claims. (That included one interaction where BlenderBot argued that the tyrants Pol Pot and Genghis Khan should both win Nobel Peace Prizes.)

But that wasn’t what surprised me. We know language models, even advanced ones, still struggle with bias and truthfulness. What surprised me was how incompetent BlenderBot is.

I spend a lot of time exploring language models. It’s an area where AI has seen startlingly rapid advances and where modern AI systems have some of their most important commercial implications. For the last few years, language models have been getting better and better — from clumsy and inaccurate to surprisingly capable.

Language models are used for all sorts of things, like identifying the topic of documents, translating between languages, and understanding human speech. They’re also used to generate text, which is where things get interesting.

(The preceding paragraph was generated by GPT-3, a language model. See what I mean?)
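To make the "generate text" part concrete, here is a minimal sketch of what prompting a publicly available language model looks like in code. It uses the open source Hugging Face transformers library and the small GPT-2 model as a stand-in, purely for illustration; it is not how BlenderBot or GPT-3 is actually served.

    # Minimal text-generation sketch using the Hugging Face transformers library.
    # GPT-2 stands in here for larger models like GPT-3, which are only reachable
    # through hosted APIs; this is illustrative, not Meta's or OpenAI's setup.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "Language models are used for all sorts of things, like"
    outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

    print(outputs[0]["generated_text"])

Give a model like this an opening phrase and it continues the text, one predicted word at a time; everything from chatbots to writing assistants is built on some version of that trick.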

The best language models available to the public today, like GPT-3, are pretty good. But GPT-3 came out two years ago — ages, in AI time — and considerably better models now exist.

And then there’s BlenderBot.
