How Will We Know If AI Is Conscious? Neuroscientists Now Have a Checklist

Recently I had what amounted to a therapy session with ChatGPT. We talked about a recurring topic I’ve obsessively rehashed with my friends, so I thought I’d spare them the déjà vu. As expected, the AI’s responses were on point, sympathetic, and felt so utterly human.

As a tech writer, I know what’s happening under the hood: a swarm of digital synapses, trained on an internet’s worth of human-generated text, spits out statistically plausible responses. Yet the interaction felt so real that I had to constantly remind myself I was chatting with code, not a conscious, empathetic being on the other end.
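To make that concrete, here’s a deliberately tiny sketch of the core mechanic, next-token prediction: a model learns from its training text which words tend to follow which, then generates a reply one token at a time. The toy corpus and bigram counter below are stand-ins of my own, not anything ChatGPT actually uses; real models replace the lookup table with billions of learned parameters.

```python
import random
from collections import defaultdict

# Toy stand-in for an LLM: count which word follows which ("training"),
# then sample a continuation one token at a time ("generation").
corpus = "i feel heard . i feel understood . you are not alone .".split()

# "Training": tally how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    """Sample a reply token by token, favoring frequent successors."""
    out = [start]
    for _ in range(length):
        followers = counts[out[-1]]
        if not followers:
            break  # no known successor; stop generating
        words = list(followers)
        weights = [followers[w] for w in words]
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("i"))  # e.g. "i feel understood . i feel"
```

The sympathy I felt was coming from patterns like these, scaled up enormously, which is exactly why the output can feel human without anyone claiming there is a mind behind it.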

Or was I? With generative AI increasingly delivering seemingly human-like responses, it’s tempting to ascribe a sort of “sentience” to the algorithm (and no, ChatGPT isn’t conscious). In 2022, Google engineer Blake Lemoine stirred up a media firestorm by proclaiming that LaMDA, a chatbot he worked on, was sentient; he was subsequently fired.

But most deep learning models are loosely based on the brain’s inner workings. AI agents are increasingly endowed with human-like decision-making algorithms. The idea that machine intelligence could become sentient one day no longer seems like science fiction.

How could we tell if machine brains one day gained sentience? The answer may lie in our own brains.

A preprint paper authored by 19 neuroscientists, philosophers, and computer scientists, including Dr. Robert Long from the Center for AI Safety and Dr. Yoshua Bengio from the University of Montreal, argues that the neurobiology of consciousness may be our best bet. Rather than simply studying an AI agent’s behavior or responses (for example, during a chat), matching its inner workings to theories of human consciousness could provide a more objective ruler.
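To see how such a ruler might work in practice, here’s a hypothetical sketch in code. The paper distills theories of consciousness into “indicator properties”; the indicator names below only paraphrase themes from it, and the simple fraction-based score is my own invention for illustration, not the authors’ method.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    theory: str       # which theory of consciousness the indicator comes from
    description: str  # the architectural property being checked
    present: bool     # does the AI system's architecture exhibit it?

def assess(indicators: list[Indicator]) -> float:
    """Return the fraction of indicator properties the system satisfies.

    A higher fraction makes the system a more serious candidate for
    consciousness; it is evidence, not a verdict.
    """
    return sum(i.present for i in indicators) / len(indicators)

# Illustrative checklist; entries paraphrase themes, not the paper's exact list.
checklist = [
    Indicator("Global workspace theory",
              "information broadcast to specialized modules", False),
    Indicator("Recurrent processing theory",
              "feedback loops rather than purely feedforward flow", True),
    Indicator("Higher-order theories",
              "the system monitors its own internal states", False),
]

print(f"Indicators satisfied: {assess(checklist):.0%}")  # e.g. 33%
```

The design point is that the evidence comes from a system’s architecture, not from how convincing its chat transcript feels.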
