
See No Evil, Hear No Evil: The Use of Deepfakes in Social Engineering Attacks

Artificial Intelligence (AI) is one of the most high-profile technology developments in recent history, and there would appear to be no end to what it can do. From driverless cars, dictation tools, translator apps, predictive analytics, and applicant tracking to retail tools such as smart shelves and carts and apps that help people with disabilities, AI can be a powerful component of wonderful tech products and services. But it can also be used for nefarious purposes, and ethical considerations around the use of AI are still in their infancy.

In their book Tools and Weapons, Brad Smith and Carol Ann Browne argue for the need for ethics in AI, and with good reason. Many AI products and services have come under scrutiny for negatively impacting certain populations, whether by exhibiting racial and gender bias or by making flawed predictions.

Voice Cloning and Deepfakes

Now, with AI-powered voice technology, anyone can clone a voice. This is exactly what happened to Bill Gates, whose voice was cloned by Facebook engineers, probably without his consent. Voice cloning is already being used for fraud: in 2019, fraudsters cloned the voice of a chief executive and tricked the CEO of a UK-based energy firm into transferring approximately €220,000. Similar crimes using the same technology have since emerged.

Voice cloning is not the only concern raised by AI technology. The combination of voice cloning and video manipulation has given rise to what are known as deepfakes. With the help of software, anyone can create convincing and often hard-to-authenticate images or videos of someone else. This worries cybersecurity experts both because the technology is open source, making it available to anyone with skill and imagination, and because it is still largely unregulated, making it easy to use for nefarious purposes.

Similar to the Bill Gates voice cloning demonstration, a deepfake of Belgian Prime Minister Sophie Wilmès speaking about COVID-19 was released by a political group. One potential harm of deepfakes is the spread of misinformation. Another is their power to influence the opinions of ordinary people who trust and look up to public figures. The person who is cloned can also suffer reputational damage, leading to loss of income or opportunities as well as psychological harm.

 
