The generative AI industry will be worth about A$22 trillion by 2030, according to the CSIRO. These systems – of which ChatGPT is currently the best known – can write essays and code, generate music and artwork, and have entire conversations. But what happens when they’re turned to illegal uses?
Last week, the streaming community was rocked by a scandal linked to the misuse of generative AI. Popular Twitch streamer Atrioc issued a teary-eyed apology video after being caught viewing pornography featuring the superimposed faces of fellow women streamers.
The “deepfake” technology needed to Photoshop a celebrity’s head onto a porn actor’s body has been around for a while, but recent advances have made the results much harder to detect.
And that’s the tip of the iceberg. In the wrong hands, generative AI could do untold damage. There’s a lot we stand to lose, should laws and regulation fail to keep up.
Last month, generative AI app Lensa came under fire for allowing its system to create fully nude and hyper-sexualised images from users’ headshots. Controversially, it also whitened the skin of women of colour and made their features more European.
The backlash was swift. But what’s relatively overlooked is the vast potential to use artistic generative AI in scams. At the far end of the spectrum, there are reports of these tools being able to fake fingerprints and facial scans (the method most of us use to lock our phones).