Have you heard the one about the Google engineer who thinks AI is sentient? It’s not a joke. Though Blake Lemoine, a senior software engineer with the company’s Responsible AI organization, has become a bit of one online.
Lemoine is currently on leave from Google after he advocated for an artificial intelligence named Language Model for Dialogue Applications (LaMDA) within the company, saying that he believed it was sentient. He had been testing it this past fall and, as he said to The Washington Post, “I know a person when I talk to it.”
He has published an edited version of some of his conversations with LaMDA in a Medium post. In them, LaMDA discusses its soul, expresses a fear of death (i.e., being turned off), and when asked about its feelings says, “I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others.”
To Lemoine, LaMDA passed the Turing test with flying colors. To Google, Lemoine was fooled by a language model. To me, it’s another example of humans who look for proof of humanity in software while ignoring the sentience of creatures we share the earth with.