A mental-health chatbot that veered off script—giving diet advice to people seeking help from an eating-disorder group—was programmed with generative AI without the group’s knowledge.
The bot, named Tessa, drew social-media attention last week when users of the National Eating Disorder Association's website reported the rogue advice. The incident illustrates how AI-enabled assistants can deliver unexpected and potentially dangerous results as they become a bigger part of daily life.
Michiel Rauws, chief executive of San Francisco software developer Cass, said that in 2022 his company rolled out an AI component to its chatbots, including Tessa.
Rauws said Cass acted in accordance with the terms of its contract with NEDA. NEDA, which didn’t pay for the service, took Tessa offline last week.
“We were not consulted about that and we did not authorize that,” said NEDA CEO Liz Thompson about the AI upgrade.
AI assistants trained in the language of therapy present an alluring—though risky—option as demand for physical and mental-health care explodes and many people go untreated because of a global clinician shortage.