IDENTIFY AI
ChatGPT and other AI systems have emerged as hugely useful assistants. Various businesses have already incorporated the technology to help their employees, for example by assisting lawyers in drafting contracts, helping customer service agents deal with queries, and supporting programmers as they develop code.
But there is increasing concern that the same technology can be put to malicious use. For example, chatbots capable of realistic human responses could enable new kinds of denial-of-service attacks, such as tying up all the customer service agents at a business or all the emergency operators at a 911 call center.
That represents a considerable threat. What’s needed, of course, is a fast and reliable way to distinguish between GPT-enabled bots and real humans.
Enter Hong Wang at the University of California, Santa Barbara, and colleagues, who are searching for tasks that are hard for GPT bots to answer but simple for humans (and vice versa). Their goal is to distinguish between them using a single question and they have found several that can do the trick (for now).
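One class of question reported to trip up language models involves simple character-level string manipulation, which humans find trivial. As a purely illustrative sketch (the function names, the specific task, and the verification logic here are assumptions, not the researchers' actual protocol), a single-question challenge of this kind could be generated and checked like so:

```python
import random
import string

def make_challenge(length=12, seed=None):
    """Generate a single-question challenge: remove the vowels from a
    random lowercase string. Trivial for a human reading the string,
    but character-level edits like this are often error-prone for LLMs.
    """
    rng = random.Random(seed)
    s = "".join(rng.choice(string.ascii_lowercase) for _ in range(length))
    question = f"Write the string '{s}' with all vowels removed."
    answer = "".join(c for c in s if c not in "aeiou")
    return question, answer

def check_response(expected, response):
    # Normalize case and surrounding whitespace before comparing,
    # so trivial formatting differences don't fail a human respondent.
    return response.strip().lower() == expected

question, answer = make_challenge(seed=7)
```

A responder that answers correctly within a short time window would be treated as likely human; in practice such a test only works until models improve at the chosen task, which is why the researchers note their questions work "for now".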
Distinguishing between bots and humans has long been an issue. In 1950, Alan Turing described a test to tell humans from sufficiently advanced computers, the so-called Turing Test.