The 3 things an AI must demonstrate to be considered sentient

A Google developer recently decided that one of the company’s chatbots, a large language model (LLM) called LaMDA, had become sentient.

According to a report in the Washington Post, the developer identifies as a Christian and believes the machine has something akin to a soul, and that it has become sentient.

As is always the case, the “is it alive?” nonsense has lit up the news cycle. It’s a juicy story whether you’re imagining what it might be like if the dev were right or dunking on them for being so silly.

We don’t want to dunk on anyone here at Neural, but it’s flat-out dangerous to put these kinds of ideas in people’s heads.

The more we, as a society, pretend that we’re “thiiiis close” to creating sentient machines, the easier it’ll be for bad actors, big tech, and snake oil startups to manipulate us with false claims about machine learning systems.

The burden of proof should be on the people making the claims. But what should that proof look like? If a chatbot says “I’m sentient,” who gets to decide if it really is or not?
