The Case for Transparent AI

I can’t go on Facebook without seeing magicians.

I can trace it back to when I watched a video of America’s Got Talent. It started with singers, but soon the feed moved on to other categories, including illusionists. That was enough to tell Facebook’s algorithms that I had to be interested in magic and that they should show me more of what they deduced I wanted to see. Now I have to be careful, because if I click on any of that content, it will reinforce the algorithm’s notion that I must really be interested in card tricks, and pretty soon that’s all Facebook will ever show me — even if it was all just a passing curiosity.

My experience is not new or particularly unique — Eli Pariser warned us about social media “filter bubbles” back in 2011 — but it’s a handy illustration of the dark places an algorithm can take you. I may get a bit annoyed when Facebook serves up a David Blaine video, but filter bubbles can be downright dangerous, turning otherwise neutral platforms into breeding grounds for all sorts of ugly ideas.

## Where does my data go?

The truth is, most people have little understanding of how AI works — they just know that computers are collecting their data. And that can be scary.

Where does that data go, and who has access to it? Is it being used for my benefit, or is it being harnessed to sell me things and increase corporate profits? If you are offering a product or service with AI built into it, these are the questions your users and customers will ask. If someone is entrusting you with their data, you don’t just owe them answers. You owe them transparency.

When we were first designing Charli — our software that uses AI to help customers automate tasks and keep track of all their content and other “stuff” — we envisioned it as a “fire-and-forget” product. In other words, we were asking people to hand their data over to Charli and let the AI worry about it.
