Cybersecurity

Here Come the AI Worms

As generative AI systems like OpenAI’s ChatGPT and Google’s Gemini become more advanced, they are increasingly being put to work. Startups and tech companies are building AI agents and ecosystems on top of the systems that can complete boring chores for you: think automatically making calendar bookings and potentially buying products. But as the tools are given more freedom, it also increases the potential ways they can be attacked.

Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers have created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research.

Nassi, along with fellow researchers Stav Cohen and Ron Bitton, created the worm, dubbed Morris II, as a nod to the original Morris computer worm that caused chaos across the internet in 1988. In a research paper and website shared exclusively with WIRED, the researchers show how the AI worm can attack a generative AI email assistant to steal data from emails and send spam messages—breaking some security protections in ChatGPT and Gemini in the process.

The research, which was undertaken in test environments and not against a publicly available email assistant, comes as large language models (LLMs) are increasingly becoming multimodal, being able to generate images and video as well as text. While generative AI worms haven’t been spotted in the wild yet, multiple researchers say they are a security risk that startups, developers, and tech companies should be concerned about.

Source

Mots-clés : cybersécurité, sécurité informatique, protection des données, menaces cybernétiques, veille cyber, analyse de vulnérabilités, sécurité des réseaux, cyberattaques, conformité RGPD, NIS2, DORA, PCIDSS, DEVSECOPS, eSANTE, intelligence artificielle, IA en cybersécurité, apprentissage automatique, deep learning, algorithmes de sécurité, détection des anomalies, systèmes intelligents, automatisation de la sécurité, IA pour la prévention des cyberattaques.

Veille-cyber

Share
Published by
Veille-cyber

Recent Posts

Bots et IA biaisées : menaces pour la cybersécurité

Bots et IA biaisées : une menace silencieuse pour la cybersécurité des entreprises Introduction Les…

5 jours ago

Cloudflare en Panne

Cloudflare en Panne : Causes Officielles, Impacts et Risques pour les Entreprises  Le 5 décembre…

5 jours ago

Alerte sur le Malware Brickstorm : Une Menace pour les Infrastructures Critiques

Introduction La cybersécurité est aujourd’hui une priorité mondiale. Récemment, la CISA (Cybersecurity and Infrastructure Security…

5 jours ago

Cloud Computing : État de la menace et stratégies de protection

  La transformation numérique face aux nouvelles menaces Le cloud computing s’impose aujourd’hui comme un…

6 jours ago

Attaque DDoS record : Cloudflare face au botnet Aisuru – Une analyse de l’évolution des cybermenaces

Les attaques par déni de service distribué (DDoS) continuent d'évoluer en sophistication et en ampleur,…

6 jours ago

Poèmes Pirates : La Nouvelle Arme Contre Votre IA

Face à l'adoption croissante des technologies d'IA dans les PME, une nouvelle menace cybersécuritaire émerge…

6 jours ago

This website uses cookies.