In 1950, the English computer scientist Alan Turing devised a test he called the imitation game: could a computer program ever convince a human interlocutor that they were talking to another human, rather than to a machine?

The Turing test, as it became known, is often thought of as a test of whether a computer could ever really “think.” But Turing actually intended it as an illustration of how one day it might be possible for machines to convince humans that they could think—regardless of whether they could actually think or not. Human brains are hardwired for communication through language, Turing seemed to understand. Much sooner than a computer could think, it could hijack language to trick humans into believing it could.

Seven decades later, in 2022, even the most cutting-edge artificial intelligence (AI) systems cannot think in any way comparable to a human brain. But they can easily pass the Turing test. This summer, Google fired one of its engineers who had become convinced that one of its chatbots had reached sentience. For years, AI researchers have been grappling with the ethical ramifications of releasing into the wild a program that could convince an interlocutor of its own humanity. Such a machine could lead people to believe false information. It could convince people to make unwise decisions, or even inspire false feelings of requited love in the lonely or vulnerable. To release such a program would surely be deeply unethical. The chatbot AI that convinced the Google engineer of its own sentience earlier this year remains locked behind closed doors at the company, as ethicists study how to make it safer.
