Meta AI chatbot
Earlier this month, Meta (the corporation formerly known as Facebook) released an AI chatbot with the innocuous name Blenderbot that anyone in the US can talk with. Immediately, users all over the country started posting the AI's takes condemning Facebook, while pointing out that, as has often been the case with language models like this one, it's very easy to get the AI to spread racist stereotypes and conspiracy theories.
When I played with Blenderbot, I definitely saw my share of bizarre AI-generated conspiracy theories, like one about how big government is suppressing the true Bible, plus plenty of horrifying moral claims. (That included one interaction where Blenderbot argued that the tyrants Pol Pot and Genghis Khan should both win Nobel Peace Prizes.)
But that wasn’t what surprised me. We know language models, even advanced ones, still struggle with bias and truthfulness. What surprised me was that Blenderbot is really incompetent.
I spend a lot of time exploring language models. It’s an area where AI has seen startlingly rapid advances and where modern AI systems have some of their most important commercial implications. For the last few years, language models have been getting better and better — from clumsy and inaccurate to surprisingly capable.
Language models are used for all sorts of things, like identifying the topic of documents, translating between languages, and understanding human speech. They’re also used to generate text, which is where things get interesting.
(The preceding paragraph was generated by GPT-3, a language model. See what I mean?)
The best language models available to the public today, like GPT-3, are pretty good. But GPT-3 came out two years ago — ages, in AI time — and considerably better models now exist.
And then there’s Blenderbot.