A mental-health chatbot that veered off script—giving diet advice to people seeking help from an eating-disorder group—was programmed with generative AI without the group’s knowledge.
The bot, named Tessa, was the focus of social-media attention last week when users of the National Eating Disorder Association's website reported the rogue advice. The incident illustrates how AI-enabled assistants can deliver unexpected and potentially dangerous results as they become a bigger part of daily life.
Michiel Rauws, chief executive of San Francisco software developer Cass, said that in 2022 his company rolled out an AI component to its chatbots, including Tessa.
Rauws said Cass acted in accordance with the terms of its contract with NEDA. NEDA, which didn’t pay for the service, took Tessa offline last week.
“We were not consulted about that and we did not authorize that,” said NEDA CEO Liz Thompson about the AI upgrade.
AI assistants trained in the language of therapy present an alluring, though risky, option as demand for physical and mental-health care explodes and many people go untreated because of a global clinician shortage.