Ex-Google engineer Blake Lemoine discusses sentient AI


Ex-Google engineer Blake Lemoine discusses why LaMDA and other AI systems may be considered sentient, and explains exactly how much AI systems know about consumers.

Software engineer Blake Lemoine worked with Google's Ethical AI team on Language Model for Dialogue Applications (LaMDA), examining the large language model for bias on topics such as sexual orientation, gender identity, ethnicity, and religion.

Over the course of several months, Lemoine, who identifies as a Christian mystic, came to believe that LaMDA was a living being, a conclusion he grounded in his spiritual beliefs. He published transcripts of his conversations with LaMDA as well as blog posts about the AI ethics questions the system raised.

In June, Google put Lemoine on administrative leave; last week, he was fired. In a statement, Google said Lemoine's claims that LaMDA is sentient are "wholly unfounded."

"It's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information," Google said in a statement. "We will continue our careful development of language models, and we wish Blake well."

Read more: https://www.techtarget.com/searchenterpriseai/feature/Ex-Google-engineer-Blake-Lemoine-discusses-sentient-AI