How a Chatbot Went Rogue

A mental-health chatbot that veered off script, giving diet advice to people seeking help from an eating-disorder group, was programmed with generative AI without the group's knowledge.

The bot, named Tessa, drew social-media attention last week when users of the National Eating Disorder Association's website reported the rogue advice. The incident illustrates how AI-enabled assistants can deliver unexpected and potentially dangerous results as they become a bigger part of daily life.

Michiel Rauws, chief executive of San Francisco software developer Cass, said that in 2022 his company rolled out an AI component to its chatbots, and that included Tessa.

Rauws said Cass acted in accordance with the terms of its contract with NEDA. NEDA, which didn’t pay for the service, took Tessa offline last week.

“We were not consulted about that and we did not authorize that,” said NEDA CEO Liz Thompson about the AI upgrade.

AI assistants trained in the language of therapy present an alluring—though risky—option as demand for physical and mental-health care explodes, and many people are untreated because of a global clinician shortage.