The danger is data poisoning: manipulating the information used to train machines offers a virtually untraceable way to slip past AI-powered defenses. Many companies may not be ready for the escalating challenge. The global market for AI cybersecurity is already expected to triple to $35 billion by 2028, and security providers and their clients may have to patch together multiple strategies to keep threats at bay.
The very nature of machine learning, a subset of AI, is what data poisoning targets. Given reams of data, computers can be trained to categorize information correctly. A system may never have seen a picture of Lassie, but given enough examples of different animals correctly labeled by species (and even breed), it should be able to surmise that she is a dog. With still more samples, it could correctly guess the breed of the famous TV canine: Rough Collie. The computer doesn't really know, though; it is merely making a statistically informed inference from past training data.
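That statistical dependence on training data is exactly what poisoning exploits, and the mechanics can be sketched in a few lines. The toy example below is illustrative only and not drawn from the article: it trains a minimal nearest-centroid classifier to separate "benign" from "malicious" samples, then shows how an attacker who slips mislabeled points into the training set drags the decision boundary until a hostile sample is waved through. All data, labels, and function names here are synthetic assumptions.

```python
# Illustrative data-poisoning attack against a toy nearest-centroid
# classifier. All data below is synthetic.

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return tuple(sum(coords) / n for coords in zip(*points))

def train(samples):
    """samples: list of ((x, y), label) pairs. Returns one centroid per label."""
    by_label = {}
    for point, label in samples:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], point))

# Clean training set: benign activity clusters near (0, 0),
# malicious activity near (10, 10).
clean = [
    ((0, 0), "benign"), ((1, 1), "benign"), ((0, 1), "benign"),
    ((10, 10), "malicious"), ((11, 10), "malicious"), ((10, 11), "malicious"),
]

# An attacker's sample at (6, 6) is correctly flagged by the clean model.
print(predict(train(clean), (6, 6)))     # -> malicious

# Poisoning: the attacker sneaks mislabeled "benign" points near (6, 6)
# into the training data, pulling the benign centroid toward the attack.
poisoned = clean + [
    ((7, 7), "benign"), ((6, 7), "benign"),
    ((7, 6), "benign"), ((6, 6), "benign"),
]

# The same sample now slips past the retrained model.
print(predict(train(poisoned), (6, 6)))  # -> benign
```

Nothing in the algorithm flags the poisoned points as suspicious: they are just a few more labeled examples, which is why this class of attack is so hard to trace after the fact.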