AI Chatbots Are Getting Better.

In 1950, the English computer scientist Alan Turing devised a test he called the imitation game: could a computer program ever convince a human interlocutor that they were talking to another human, rather than to a machine?

The Turing test, as it became known, is often thought of as a test of whether a computer could ever really “think.” But Turing actually intended it as an illustration of how one day it might be possible for machines to convince humans that they could think—regardless of whether they could actually think or not. Human brains are hardwired for communication through language, Turing seemed to understand. Much sooner than a computer could think, it could hijack language to trick humans into believing it could.

Seven decades later, in 2022, even the most cutting-edge artificial intelligence (AI) systems cannot think in any way comparable to a human brain. But they can easily pass the Turing test. This summer, Google fired one of its engineers who had become convinced that one of its chatbots had reached sentience. For years, AI researchers have been grappling with the ethical ramifications of releasing into the wild a program that could convince an interlocutor of its own humanity. Such a machine could lead people to believe false information. It could convince people to make unwise decisions, or even inspire false feelings of requited love in the lonely or vulnerable. To release such a program would surely be deeply unethical. The chatbot that convinced the Google engineer of its sentience earlier this year remains locked behind closed doors at the company, as ethicists study how to make it safer.
