A few months ago I talked to Zulfikar Ramzan, the former chief technology officer of cybersecurity firm RSA, about the problems of using deep learning to protect corporate networks and for other cybersecurity tasks.
As he explained, the trendy A.I. technique may be ill-suited for cybersecurity for several reasons. Companies may lack enough clean data to train neural networks to recognize patterns in hacking attempts, for instance. Hackers could also compromise a company’s deep-learning-powered security tool by “poisoning” the data used to train it, rendering the tool ineffective. Additionally, A.I. researchers are often unable to explain how deep-learning systems reach their conclusions, which makes troubleshooting A.I.-powered security tools a major problem, Ramzan noted.
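To make the poisoning risk Ramzan describes concrete, here is a minimal, hypothetical sketch (not any vendor's actual system): a toy classifier separates synthetic "benign" and "malicious" network events, then an attacker relabels some malicious training samples as benign, dragging the learned model toward the attack traffic so that real attacks slip past it. All names and numbers below are illustrative assumptions.

```python
import random

random.seed(7)

def make_data(n):
    """Synthetic 'network events': benign cluster near (0, 0), malicious near (3, 3)."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)  # 0 = benign, 1 = malicious
        center = 3.0 * label
        point = (center + random.gauss(0, 1), center + random.gauss(0, 1))
        data.append((point, label))
    return data

def fit_centroids(train):
    """A toy 'model': the mean point of each class (nearest-centroid classifier)."""
    sums = {0: [0.0, 0.0, 0], 1: [0.0, 0.0, 0]}
    for (x, y), label in train:
        sums[label][0] += x
        sums[label][1] += y
        sums[label][2] += 1
    return {c: (sx / n, sy / n) for c, (sx, sy, n) in sums.items()}

def detection_rate(model, test):
    """Fraction of truly malicious events the model flags as malicious."""
    malicious = [(p, l) for p, l in test if l == 1]
    hits = 0
    for (x, y), _ in malicious:
        pred = min(model, key=lambda c: (x - model[c][0]) ** 2 + (y - model[c][1]) ** 2)
        hits += (pred == 1)
    return hits / len(malicious)

train, test = make_data(2000), make_data(2000)
clean_det = detection_rate(fit_centroids(train), test)

# Poisoning: the attacker slips malicious samples into the training feed
# relabeled as benign, pulling the 'benign' centroid toward the attack cluster.
poisoned = [((x, y), 0) if label == 1 and random.random() < 0.8 else ((x, y), label)
            for (x, y), label in train]
poisoned_det = detection_rate(fit_centroids(poisoned), test)

print(f"detection rate, clean training data:    {clean_det:.2f}")
print(f"detection rate, poisoned training data: {poisoned_det:.2f}")
```

The same dynamic applies to a deep network, only with far more parameters and far less visibility into *why* the poisoned model misses attacks, which is the explainability problem Ramzan also raises.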
Last week, I chatted with Guy Caspi, the CEO of the security startup Deep Instinct, about his thoughts on deep learning and security. He disagreed with Ramzan’s comments about deep learning, which makes sense considering Caspi’s company uses the technology to power its security tools for companies.