Scams, deepfake porn and romance bots

The generative AI industry will be worth about A$22 trillion by 2030, according to the CSIRO. These systems – of which ChatGPT is currently the best known – can write essays and code, generate music and artwork, and have entire conversations. But what happens when they’re turned to illegal uses?

Last week, the streaming community was rocked by a headline linked to the misuse of generative AI. Popular Twitch streamer Atrioc posted a teary-eyed apology video after being caught viewing pornography with the faces of other women streamers superimposed on performers’ bodies.

The “deepfake” technology needed to Photoshop a celebrity’s head onto a porn actor’s body has been around for a while, but recent advances have made the fakes much harder to detect.
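Why is detection so hard? Under the hood, most deepfake detectors are essentially image classifiers run frame by frame over a video, trained to spot the subtle statistical artefacts generators leave behind. The sketch below illustrates the idea in Python; the model file, clip name and preprocessing are hypothetical placeholders, not any real published detector.

```python
# Minimal sketch of frame-by-frame deepfake screening (illustrative only).
# "detector.pt" is a hypothetical real-vs-fake classifier, not a real model.
import cv2    # pip install opencv-python
import torch  # pip install torch

model = torch.jit.load("detector.pt")  # hypothetical pretrained detector
model.eval()

video = cv2.VideoCapture("suspect_clip.mp4")
scores = []
while True:
    ok, frame = video.read()
    if not ok:
        break
    # Resize and normalise each frame to whatever the detector expects.
    face = cv2.resize(frame, (224, 224))
    tensor = torch.from_numpy(face).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        scores.append(model(tensor).sigmoid().item())
video.release()

# A high average score across frames suggests the clip was manipulated.
print(f"mean fake probability: {sum(scores) / len(scores):.2f}")
```

This is also why detection keeps falling behind: as generators improve, the artefacts such classifiers rely on grow fainter.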

And that’s the tip of the iceberg. In the wrong hands, generative AI could do untold damage. There’s a lot we stand to lose, should laws and regulation fail to keep up.

From controversy to outright crime

Last month, generative AI app Lensa came under fire for allowing its system to create fully nude and hyper-sexualised images from users’ headshots. Controversially, it also whitened the skin of women of colour and made their features more European.

The backlash was swift. But what’s relatively overlooked is the vast potential to use artistic generative AI in scams. At the far end of the spectrum, there are reports of these tools being able to fake fingerprints and facial scans (the methods many of us use to lock our phones).
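To see why faked biometrics matter, it helps to know that phones don’t compare scans pixel by pixel. They convert each scan into an embedding vector and unlock if it is similar enough to the stored template. The toy sketch below (all vectors and the threshold are invented for illustration) shows that a synthetic sample whose embedding crosses the threshold unlocks the device exactly as a genuine one does.

```python
# Toy sketch of threshold-based biometric matching (all numbers invented).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.85  # acceptance cut-off; real systems tune this carefully

rng = np.random.default_rng(42)
template = rng.normal(size=128)                     # owner's stored template
genuine = template + 0.20 * rng.normal(size=128)    # owner's fresh scan
synthetic = template + 0.25 * rng.normal(size=128)  # a good-enough generated fake

for name, probe in [("genuine scan", genuine), ("synthetic fake", synthetic)]:
    score = cosine_similarity(probe, template)
    verdict = "unlock" if score >= THRESHOLD else "reject"
    print(f"{name}: similarity={score:.2f} -> {verdict}")
```

The matcher has no notion of where a sample came from; anything close enough in embedding space gets in, which is what makes generated fingerprints and faces a credible attack.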
