Earlier this month, a Hong Kong company lost HK$200 million (A$40 million) in a deepfake scam. An employee transferred funds following a video conference call with scammers who looked and sounded like senior company officials.
Generative AI tools can create image, video and voice replicas of real people saying and doing things they never would have done. And these tools are becoming increasingly easy to access and use.
These tools can be used to perpetrate intimate image abuse (including so-called "revenge porn") and to disrupt democratic processes. Many jurisdictions are currently grappling with how to regulate AI deepfakes.
But if you’ve been a victim of a deepfake scam, can you obtain compensation or redress for your losses? The legislation hasn’t caught up yet.
In most cases of deepfake fraud, scammers avoid trying to fool banks and their security systems, opting instead for so-called "push payment" frauds, in which victims are tricked into directing their bank to pay the fraudster.
So, if you’re seeking a remedy, there are at least four possible targets:
The quick answer is that once the fraudster vanishes, it is currently unclear whether you have a right to a remedy from any of these other parties (though that may change in the future).
Let’s see why.