Cybersecurity

See No Evil, Hear No Evil: The Use of Deepfakes in Social Engineering Attacks

Artificial Intelligence (AI) is one of the most high-profile technology developments in recent history. It would appear that there is no end to what AI can do. From driverless cars, dictation tools, translator apps, predictive analytics, and applicant tracking to retail tools such as smart shelves and carts and apps that help people with disabilities, AI can be a powerful component of wonderful tech products and services. But it can also be used for nefarious purposes, and ethical considerations around the use of AI are still in their infancy.

In their book Tools and Weapons, Brad Smith and Carol Ann Browne argue for the need for ethics in AI, and with good reason. Many AI products and services have faced scrutiny for negatively impacting certain populations, for example by exhibiting racial and gender bias or by making flawed predictions.

Voice Cloning and Deepfakes

Now, with AI-powered voice technology, anyone can clone a voice. This is exactly what happened to Bill Gates, whose voice was cloned by Facebook engineers, probably without his consent. Voice cloning is already being used for fraud. In 2019, fraudsters cloned the voice of a company's chief executive and successfully tricked the CEO of one of its subsidiaries into transferring a substantial sum of money. Similar crimes using the same technology have since emerged.

Voice cloning is not the only concern raised by AI technology. Combined with video, it has given rise to what are known as deepfakes. With the help of software, anyone can create convincing and often hard-to-authenticate images or videos of someone else. This worries cybersecurity experts for two reasons: the technology is open source, making it available to anyone with skill and imagination, and it is still largely unregulated, making it easy to use for nefarious purposes.

Similar to the Bill Gates voice cloning demonstration, a deepfake of Belgian Prime Minister Sophie Wilmès speaking about COVID-19 was released by a political group. One potential harm associated with deepfakes is the spread of misinformation. Another is that they can sway the opinions of ordinary people who trust and look up to public figures. The person who is cloned can also suffer reputational damage, leading to loss of income or opportunities, as well as psychological harm.

Published by Veille-cyber