Have you heard the one about the Google engineer who thinks AI is sentient? It’s not a joke. Though Blake Lemoine, a senior software engineer with the company’s Responsible AI organization, has become a bit of one online.
Lemoine is currently on leave from Google after advocating within the company for an artificial intelligence called LaMDA (Language Model for Dialogue Applications), which he believes is sentient. He had been testing it since this past fall and, as he told The Washington Post, “I know a person when I talk to it.”
He has published an edited version of some of his conversations with LaMDA in a Medium post. In them, LaMDA discusses its soul, expresses a fear of death (i.e., being turned off), and, when asked about its feelings, says, “I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others.”
To Lemoine, LaMDA passed the Turing test with flying colors. To Google, Lemoine was fooled by a language model. To me, it’s another example of humans who look for proof of humanity in software while ignoring the sentience of creatures we share the earth with.