A few months ago I talked to Zulfikar Ramzan, the former chief technology officer of cybersecurity firm RSA, about the problems with using deep learning to protect corporate networks and perform related cybersecurity tasks.
As he explained, the trendy A.I. technique may be ill-suited for cybersecurity for several reasons. Companies may lack enough clean data to train neural networks to recognize patterns in hacking attempts, for instance. Hackers could also compromise a company’s deep-learning-powered security tool by “poisoning” the data used to train it, rendering the tool ineffective. Additionally, A.I. researchers are often unable to explain how deep-learning systems reach their conclusions, which makes troubleshooting A.I.-powered security tools a major problem, Ramzan noted.
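The poisoning risk Ramzan describes can be illustrated with a toy sketch: if an attacker can mislabel training samples, even a simple model learns the wrong decision boundary. The data, labels, and nearest-centroid classifier below are all illustrative assumptions, not anything from RSA or the article.

```python
# Toy sketch of training-data "poisoning" (illustrative only):
# flipping labels in the training set degrades a nearest-centroid classifier.

def centroid_classifier(train):
    """Fit per-class feature means and return a predict function."""
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for x, y in train:
        sums[y] += x
        counts[y] += 1
    centroids = {y: sums[y] / counts[y] for y in (0, 1)}
    # Predict the class whose centroid is closest to the input feature.
    return lambda x: min(centroids, key=lambda y: abs(x - centroids[y]))

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

# Hypothetical clean training data: "benign" events cluster near 0,
# "malicious" events cluster near 10 (one made-up numeric feature).
clean = [(i * 0.1, 0) for i in range(50)] + [(10 + i * 0.1, 1) for i in range(50)]
test = [(2.0, 0), (3.0, 0), (9.0, 1), (11.0, 1)]

# The attacker poisons the set by relabeling most malicious samples as benign,
# dragging the "benign" centroid toward malicious territory.
poisoned = [(x, 0) if y == 1 and x < 14 else (x, y) for x, y in clean]

print(accuracy(centroid_classifier(clean), test))     # model trained on clean data
print(accuracy(centroid_classifier(poisoned), test))  # model trained on poisoned data
```

On the clean set the classifier separates the test points perfectly, while the poisoned model starts waving malicious samples through as benign, which is exactly the failure mode a poisoned security tool would exhibit.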
Last week, I chatted with Guy Caspi, the CEO of the security startup Deep Instinct, about his thoughts on deep learning and security. He disagreed with Ramzan’s comments about deep learning, which makes sense considering Caspi’s company uses the technology to power its security tools for companies.