IN BRIEF Many people come to believe they're interacting with something sentient when they talk to AI chatbots, according to the CEO of Replika, an app that lets users design their own virtual companions.
On Replika, people can customize how their chatbots look and pay for extra features, such as certain personality traits. Millions have downloaded the app, and many chat regularly with their made-up bots. Some even begin to think their digital companions are real, sentient entities.
"We're not talking about crazy people or people who are hallucinating or having delusions," the company's founder and CEO, Eugenia Kuyda, told Reuters. "They talk to AI and that's the experience they have."
A Google engineer made headlines last month when he said he believed one of the company’s language models was conscious. Blake Lemoine was largely ridiculed, but he doesn’t seem to be alone in anthropomorphizing AI.
These systems are not sentient, however; they merely trick humans into thinking they possess some intelligence. They mimic language and regurgitate it somewhat randomly, without any understanding of language or of the world they describe.
Still, Kuyda said humans can be swayed by the technology.
"We need to understand that [this] exists, just the way people believe in ghosts," Kuyda said. "People are building relationships and believing in something."