Earlier this month, a Hong Kong company lost HK$200 million (A$40 million) in a deepfake scam. An employee transferred funds following a video conference call with scammers who looked and sounded like senior company officials.
Generative AI tools can create image, video and voice replicas of real people saying and doing things they never would have done. And these tools are becoming increasingly easy to access and use.
This can be used to perpetrate intimate image abuse (including things like “revenge porn”) and to disrupt democratic processes. Currently, many jurisdictions are grappling with how to regulate AI deepfakes.
But if you’ve been a victim of a deepfake scam, can you obtain compensation or redress for your losses? The legislation hasn’t caught up yet.
In most cases of deepfake fraud, scammers will avoid trying to fool banks and security systems, instead opting for so-called “push payment” frauds where victims are tricked into directing their bank to pay the fraudster.
So, if you’re seeking a remedy, there are at least four possible targets, starting with the fraudster themselves.
The quick answer is that once the fraudster vanishes, it is currently unclear whether you have a right to a remedy from any of the other possible targets (though that may change in the future).
Let’s see why.