Artificial intelligence (AI) has changed the way organizations identify, respond to and recover from cyberattacks. At the same time, bad actors are weaponizing AI as both an attack vector and an attack surface, adding to the growing list of digital vulnerabilities and blind spots in the insider risk space. In 2019, a reported 14,000 deepfake videos were found online, a 100% increase over the number detected just one year earlier.
One of the most prominent forms of AI exploited by bad actors today is the deepfake. Put simply, a deepfake is a type of AI-generated media that depicts a person saying or doing something they did not say or do. In an increasingly digital world, media (e.g., video, images and audio) is used to inform decision-making, and the intention behind deepfake synthetic media is to deceive viewers, listeners and technology systems.
While many security leaders are aware of business email compromise (BEC) attacks, the weaponization of synthetic media like deepfakes threatens both the public and private sectors through BEC's Gen-Z sibling, the business identity compromise (BIC) attack. In a BIC attack, bad actors create synthetic, fictitious personas, or personas impersonating an existing employee, and strategically deploy them through one of the many forms of synthetic media to inflict maximum damage on their target.