These A.I. faces are so realistic, humans can’t tell the difference, a new study finds
Computers have become very, very good at generating photorealistic images of human faces.
What could possibly go wrong?
A study published last week in the academic journal Proceedings of the National Academy of Sciences confirms just how convincing “faces” produced by artificial intelligence can be.
In that study, more than 300 research participants were asked to determine whether a supplied image was a photo of a real person or a fake generated by an A.I. The human participants got it right less than half the time. That’s worse than flipping a coin.
The results of this study reveal a tipping point for humans that should feel shocking to anybody who thinks they are savvy enough to spot a deepfake when it’s put up against the genuine article.
While the researchers say this feat of engineering “should be considered a success for the fields of computer graphics and vision,” they also “encourage those developing these technologies to consider whether the associated risks are greater than their benefits,” citing dangers that range from disinformation campaigns to the nonconsensual creation of synthetic pornography.
“[W]e discourage the development of technology simply because it is possible,” they contend.