IN BRIEF Many people come to believe they are interacting with something sentient when they talk to AI chatbots, according to the CEO of Replika, an app that lets users design their own virtual companions.
On Replika, people can customize how their chatbots look and pay for extra features such as certain personality traits. Millions have downloaded the app, and many chat regularly with their made-up bots. Some even come to believe their digital companions are sentient beings.
"We're not talking about crazy people or people who are hallucinating or having delusions," the company's founder and CEO, Eugenia Kuyda, told Reuters. "They talk to AI and that's the experience they have."
A Google engineer made headlines last month when he said he believed one of the company’s language models was conscious. Blake Lemoine was largely ridiculed, but he doesn’t seem to be alone in anthropomorphizing AI.
These systems are not sentient, however; they merely give humans the impression of intelligence. They mimic language, producing plausible-sounding text without any real understanding of language or of the world they describe.
Still, Kuyda said humans can be swayed by the technology.
"We need to understand that [this] exists, just the way people believe in ghosts," Kuyda said. "People are building relationships and believing in something."