IDENTIFY AI
ChatGPT and other AI systems have emerged as hugely useful assistants. Businesses have already incorporated the technology to support their employees, helping lawyers draft contracts, customer service agents handle queries, and programmers develop code.
But there is increasing concern that the same technology can be put to malicious use. For example, chatbots capable of realistic human responses could enable new kinds of denial-of-service attacks, such as tying up all the customer service agents at a business or all the emergency operators at a 911 call center.
That represents a considerable threat. What’s needed, of course, is a fast and reliable way to distinguish between GPT-enabled bots and real humans.
Enter Hong Wang at the University of California, Santa Barbara, and colleagues, who are searching for tasks that are hard for GPT bots to answer but simple for humans (and vice versa). Their goal is to tell the two apart with a single question, and they have found several that can do the trick (for now).
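To make the idea concrete, here is a minimal Python sketch of one such single-question challenge: a character-counting task of the kind the researchers explore, which humans can answer by careful counting but token-based language models often get wrong. The task wording, function names, and pass/fail check below are illustrative assumptions, not the team's actual implementation.

```python
import random
import string

def counting_challenge(length: int = 40) -> tuple[str, int]:
    """Build one 'counting' challenge: ask how often a target
    letter appears in a random string. Humans can count it
    reliably; language models that see text as tokens rather
    than characters frequently miscount."""
    target = random.choice(string.ascii_lowercase)
    text = "".join(random.choices(string.ascii_lowercase, k=length))
    question = (f"How many times does the letter '{target}' "
                f"appear in the string '{text}'?")
    answer = text.count(target)
    return question, answer

def looks_human(expected: int, reply: str) -> bool:
    """Naive check (an assumption for this sketch): treat the
    respondent as human only if the reply contains the exact count."""
    return str(expected) in reply.split()

if __name__ == "__main__":
    question, answer = counting_challenge()
    print(question)              # shown to the unknown party
    print("expected:", answer)   # kept server-side for verification
```

In practice such a check would be one of several rotating question types, so a bot cannot simply be fine-tuned against a single known task.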
Distinguishing between bots and humans has long been an issue. In 1950, Alan Turing described a test to tell humans from sufficiently advanced computers, the so-called Turing Test.