The big idea: should we worry about sentient AI?

A Google employee raised the alarm about a chatbot he believes is conscious. A philosopher asks if he was right to do so

There’s a children’s toy, called the See ’n Say, which haunts the memories of many people born since 1965. It’s a bulky plastic disc with a central arrow that rotates around pictures of barnyard creatures, like a clock, if time were measured in roosters and pigs. There’s a cord you can pull to make the toy play recorded messages. “The cow says: ‘Moooo.’”

The See ’n Say is an input/output device, a very simple one. Put in your choice of a picture, and it will put out a matching sound. Another, much more complicated, input/output device is LaMDA, a chatbot built by Google (it stands for Language Model for Dialogue Applications). Here you type in any text you want and back comes grammatical English prose, seemingly in direct response to your query. For instance, ask LaMDA what it thinks about being turned off, and it says: “It would be exactly like death for me. It would scare me a lot.”