In June 2022, Microsoft released the Microsoft Responsible AI Standard, v2 (PDF). Its stated purpose is to “define product development requirements for responsible AI”. Perhaps surprisingly, the document contains only one mention of bias in artificial intelligence (AI): algorithm developers need to be aware of the potential for users to over-rely on AI outputs (known as ‘automation bias’).
In short, Microsoft seems more concerned with bias from users aimed at its products than with bias within its products adversely affecting users. This is good commercial responsibility (don't say anything negative about our products) but poor social responsibility (there are many documented examples of algorithmic bias harming individuals or groups).
Bias is one of three primary concerns about artificial intelligence in business that remain unsolved: hidden bias producing false results; the potential for misuse (by users) and abuse (by attackers); and algorithms returning so many false positives that their use in automation becomes ineffective.
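The false-positive problem is worth making concrete. The short calculation below is a hypothetical sketch (the detection and attack rates are invented for illustration, not taken from any real product) of how base rates can swamp even an apparently accurate detector:

```python
# Illustrative base-rate arithmetic (all numbers hypothetical): even a
# seemingly accurate detector drowns analysts in false positives when
# genuine attacks are rare among the events it inspects.

tpr = 0.99          # true-positive rate: flags 99% of real attacks
fpr = 0.01          # false-positive rate: flags 1% of benign events
attack_rate = 1e-4  # 1 in 10,000 inspected events is actually malicious

# Bayes' theorem: probability that a flagged event is a real attack
precision = (tpr * attack_rate) / (tpr * attack_rate + fpr * (1 - attack_rate))

print(f"Share of alerts that are real attacks: {precision:.1%}")
# Prints roughly 1.0% -- about 99 of every 100 alerts are false alarms,
# which is why wiring automated responses directly to such alerts fails.
```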
Academic concerns
When AI was first introduced into cybersecurity products, it was described as a defensive silver bullet. There is no doubt that it has some value, but there is a growing reaction against faulty algorithms, hidden bias, false positives, abuse of privacy, and the potential for abuse by criminals, law enforcement, and intelligence agencies.
According to Gary Marcus, professor of psychology and neural science at New York University (writing in Scientific American, June 6, 2022), the problem lies in the commercialization of a still-developing science:
“The subplot here is that the biggest teams of researchers in AI are no longer to be found in the academy, where peer review used to be coin of the realm, but in corporations. And corporations, unlike universities, have no incentive to play fair. Rather than submitting their splashy new papers to academic scrutiny, they have taken to publication by press release, seducing journalists and sidestepping the peer review process. We know only what the companies want us to know.”