At least five large companies will introduce “bias bounties” or hacker competitions to identify bias in artificial intelligence (AI) algorithms, predicts the just-released “North American Predictions 2022” from Forrester.
Bias bounties are modeled on bug bounties, which reward hackers or coders (often outside the organization) who find security flaws in software. In late July, Twitter launched the first major bias bounty and awarded $3,500 to a student who proved that its image cropping algorithm favored lighter, slimmer and younger faces.
“Finding bias in machine learning (ML) models is difficult, and sometimes, companies find out about unintended ethical harms once they’ve already reached the public,” wrote Rumman Chowdhury, director of Twitter META, in a blog entry. “We want to change that.”