Artificial Intelligence

Why companies need artificial intelligence explainability

Creating successful artificial intelligence programs doesn’t end with building the right AI system. These programs also need to be integrated into the organization, and stakeholders, particularly employees and customers, need to trust that the AI program is accurate and reliable.

This is the case for building enterprisewide artificial intelligence explainability, according to a new research briefing by Ida Someh, Barbara Wixom, and Cynthia Beath of the MIT Center for Information Systems Research. The researchers define artificial intelligence explainability as “the ability to manage AI initiatives in ways that ensure models are value-generating, compliant, representative, and reliable.”

The researchers identified four characteristics of artificial intelligence programs that can make it hard for stakeholders to trust them, along with ways to overcome each one:

1. Unproven value. Because artificial intelligence is still relatively new, there isn’t an extensive list of proven use cases. Leaders are often uncertain if and how their company will see returns from AI programs.

To address this, companies need to create value formulation practices, which help people substantiate how AI can be a good investment in terms that appeal to a variety of stakeholders.

2. Model opacity. Artificial intelligence relies on complex math and statistics, so it can be hard to tell if a model is producing accurate results and is compliant and ethical.

To address this, companies should develop decision tracing practices, which help artificial intelligence teams unravel the mathematics and computations behind models and convey how they work to the people who use them. These practices can include using visuals like diagrams and charts.
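As an illustration of what decision tracing can look like in practice, here is a minimal sketch, assuming a Python/scikit-learn setup; the dataset, the random-forest model, and the reporting format are illustrative assumptions, not part of the MIT CISR briefing. It uses permutation importance to show, in plain terms, which inputs a model actually relies on, output that can feed the kinds of charts and diagrams mentioned above.

```python
# Minimal decision-tracing sketch: measure which inputs a trained model
# actually relies on, so the result can be shared as a simple chart or table.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data and model; a real AI team would plug in its own.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and record how much
# held-out accuracy drops, a model-agnostic view of what the model depends on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the top drivers in plain language for non-technical stakeholders.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, drop in ranked[:5]:
    print(f"{name}: accuracy falls by {drop:.3f} when this input is scrambled")
```

The specific library matters less than the practice: any tool that turns model internals into a ranked, human-readable summary serves the same decision-tracing purpose.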

Read more
