Advanced AI systems can mimic human conversation with remarkable authenticity. Now researchers are fighting back with carefully crafted questions that force AIs to reveal themselves.
ChatGPT and other AI systems have emerged as hugely useful assistants. Various businesses have already incorporated the technology to help their employees, such as assisting lawyers with drafting contracts, helping customer service agents deal with queries, and supporting programmers as they develop code.
But there is increasing concern that the same technology can be put to malicious use. For example, chatbots capable of realistic human responses could mount new kinds of denial-of-service attacks, such as tying up all the customer service agents at a business or all the emergency operators at a 911 call center.
That represents a considerable threat. What’s needed, of course, is a fast and reliable way to distinguish between GPT-enabled bots and real humans.
Enter Hong Wang at the University of California, Santa Barbara, and colleagues, who are searching for tasks that are hard for GPT bots to answer but simple for humans (and vice versa). Their goal is to distinguish between them using a single question and they have found several that can do the trick (for now).
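One family of questions in this vein exploits the way large language models process text: because their tokenizers group characters into multi-character tokens, tasks that require reading a string letter by letter — trivial for a human — often trip them up. The sketch below is purely illustrative (it is not the researchers' actual prompt set); it generates a random character-counting question of that general kind, assuming a simple lowercase-string format.

```python
import random
import string

def make_counting_question(length=12, seed=None):
    """Generate a character-counting question: easy for a human reading
    the string, but hard for LLMs whose tokenizers obscure individual
    characters. Illustrative only -- not the paper's exact prompts."""
    rng = random.Random(seed)
    s = "".join(rng.choice(string.ascii_lowercase) for _ in range(length))
    target = rng.choice(s)            # pick a letter that appears at least once
    answer = s.count(target)          # ground truth for scoring the reply
    question = f'How many times does the letter "{target}" appear in "{s}"?'
    return question, answer

q, a = make_counting_question(seed=42)
print(q)
print("Expected answer:", a)
```

A screening system could pose such a question and compare the reply against the known answer, flagging respondents that fail as likely bots; the seed makes questions reproducible for testing, while omitting it yields a fresh question each time.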
Distinguishing between bots and humans has long been an issue. In 1950, Alan Turing described a test to tell humans from sufficiently advanced computers, the so-called Turing Test.