These A.I. faces are so realistic, humans can’t tell the difference, a new study finds
Computers have become very, very good at generating photorealistic images of human faces.
What could possibly go wrong?
A study published last week in the academic journal Proceedings of the National Academy of Sciences confirms just how convincing “faces” produced by artificial intelligence can be.
In that study, more than 300 research participants were asked to determine whether a supplied image was a photo of a real person or a fake generated by an A.I. The human participants got it right less than half the time. That’s worse than flipping a coin.
The results mark a tipping point, and they should give pause to anyone who believes they are savvy enough to spot a deepfake when it’s placed beside the genuine article.
While the researchers say this feat of engineering “should be considered a success for the fields of computer graphics and vision,” they also “encourage those developing these technologies to consider whether the associated risks are greater than their benefits,” citing dangers that range from disinformation campaigns to the nonconsensual creation of synthetic pornography.
“[W]e discourage the development of technology simply because it is possible,” they contend.