Have you heard the one about the Google engineer who thinks AI is sentient? It’s not a joke. Though Blake Lemoine, a senior software engineer with the company’s Responsible AI organization, has become a bit of one online.
Lemoine is currently on leave from Google after he advocated for an artificial intelligence named Language Model for Dialogue Applications (LaMDA) within the company, saying that he believed it was sentient. He had been testing it this past fall and, as he said to The Washington Post, “I know a person when I talk to it.”
He has published an edited version of some of his conversations with LaMDA in a Medium post. In them, LaMDA discusses its soul, expresses a fear of death (i.e., being turned off), and when asked about its feelings says, “I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others.”
To Lemoine, LaMDA passed the Turing test with flying colors. To Google, Lemoine was fooled by a language model. To me, it’s another example of humans who look for proof of humanity in software while ignoring the sentience of creatures we share the earth with.