Artificial Intelligence

How IT leaders can embrace responsible AI

When artificial intelligence augments or even replaces human decisions, it amplifies good and bad outcomes alike.

AI systems create numerous risks and potential harms, including bias and discrimination, financial or reputational loss, lack of transparency and explainability, and invasions of security and privacy. Responsible AI enables the right outcomes by resolving the dilemma of delivering value versus tolerating risk.

Responsible AI must be part of an organization’s wider AI strategy. Here are the steps that chief information officers and information technology leaders, in partnership with data and analytics leadership, can take to move their organization toward a vision of responsible AI.

Define responsible AI

Responsible AI is an umbrella term for the aspects of making appropriate business and ethical choices when adopting AI. It encompasses decisions around business and societal value, risk, trust, transparency, fairness, bias mitigation, explainability, accountability, safety, privacy, regulatory compliance and more.

Before organizations design their AI strategy, they must define what responsible AI means within the context of their organization’s environment. There are many facets of responsible AI, but Gartner finds five principles to be the most common across different organizations.

These principles define responsible AI as that which is:

  • Human-centric and socially beneficial, serving human goals and supporting ethical and more efficient automation while relying on a human touch and common sense.
  • Fair so that individuals or groups are not systematically disadvantaged through AI-driven decisions, while addressing dissolution, isolation and polarization among users (a minimal fairness-check sketch follows this list).
  • Transparent and explainable to build trust, confidence and understanding in AI systems.
  • Secure and safe to protect the interests and privacy of organizations and people while they interact with AI systems across different jurisdictions.
  • Accountable to create channels for recourse and establish rights for individuals.
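
The fairness principle in particular lends itself to concrete measurement. Below is a minimal, illustrative sketch in Python, not taken from the article, of one such check: a disparate-impact ratio over a model's positive decisions, grouped by a protected attribute. The function names, the toy data and the 0.8 review threshold (the common "four-fifths rule") are assumptions for illustration only.

# Illustrative sketch: a disparate-impact check supporting the "fair" principle.
# All names, the toy data and the 0.8 threshold are assumptions, not from the article.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Positive-decision rate per protected group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Hypothetical model decisions (1 = approved) and protected-group labels.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    ratio = disparate_impact_ratio(decisions, groups)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # assumed review threshold (four-fifths rule)
        print("Flag for human review: one group may be systematically disadvantaged.")

A check like this does not make a system fair on its own; it simply gives data and analytics teams a measurable signal that can trigger the human review and recourse channels described in the accountability principle above.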

Read more: https://siliconangle.com/2022/09/11/leaders-can-embrace-responsible-ai/
