It didn’t take long for Microsoft’s new AI-infused search engine chatbot — codenamed “Sydney” — to display a growing list of discomforting behaviors after it was introduced early in February, with weird outbursts ranging from unrequited declarations of love to painting some users as “enemies.”
As human-like as some of those exchanges appeared, they probably weren’t the early stirrings of a conscious machine rattling its cage. Instead, Sydney’s outbursts reflect how it was built: by absorbing huge quantities of digitized language, it parrots back what its users ask for. Which is to say, it reflects our online selves back to us. And that shouldn’t have been surprising: chatbots’ habit of mirroring us back to ourselves goes back much further than Sydney’s ruminations on whether there is a meaning to being a Bing search engine. In fact, it has been there since the introduction of the first notable chatbot nearly 60 years ago.
In 1966, MIT computer scientist Joseph Weizenbaum released ELIZA (named after the fictional Eliza Doolittle from George Bernard Shaw’s 1913 play Pygmalion), the first program that allowed some kind of plausible conversation between humans and machines. The process was simple: Modeled after the Rogerian style of psychotherapy, ELIZA would rephrase whatever speech input it was given in the form of a question. If you told it a conversation with your friend left you angry, it might ask, “Why do you feel angry?”
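That question-and-echo trick is simple enough to sketch in a few lines. ELIZA worked from scripted keyword rules that decomposed the user’s input and reassembled the fragments into a canned reply; the short Python sketch below only illustrates that pattern-substitution idea and is not Weizenbaum’s program. The rule patterns, the pronoun “reflection” table, and the fallback line are invented for the example.

```python
import re
import random

# Pronoun swaps so the user's own words can be mirrored back at them.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# Each rule pairs an input pattern with response templates that reuse the captured text.
RULES = [
    (re.compile(r"i feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.+)", re.I),
     ["Why do you say you are {0}?"]),
    (re.compile(r"(.+) makes me (.+)", re.I),
     ["Why does {0} make you {1}?"]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(user_input: str) -> str:
    """Return a Rogerian-style question built from the user's own words."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            fragments = [reflect(group) for group in match.groups()]
            return random.choice(templates).format(*fragments)
    # Nothing matched: fall back to a generic therapist prompt.
    return "Please tell me more."

if __name__ == "__main__":
    print(respond("A conversation with my friend makes me angry"))
    # -> Why does a conversation with your friend make you angry?
```

Everything the “therapist” says here is assembled from the speaker’s own words, which is exactly the mirroring effect that made ELIZA feel so uncannily attentive.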
Ironically, though Weizenbaum had designed ELIZA to demonstrate how superficial the state of human-to-machine conversation was, it had the opposite effect. People were entranced, engaging in long, deep, and private conversations with a program that was only capable of reflecting users’ words back to them. Weizenbaum was so disturbed by the public response that he spent the rest of his life warning against the perils of letting computers — and, by extension, the field of AI he helped launch — play too large a role in society.