Can You Tell Which Image Is a Deepfake?
Neuroscientists at the University of Sydney have found that you may not be fooled by deepfakes – unless, of course, you trust your gut instead of your brain.
They found that people’s brains can detect artificial intelligence (AI)-generated fake faces, even when those same people could not reliably report which faces were real and which were fake.
Firstly, let’s explain what a deepfake actually is. Put simply, the term deepfake is a portmanteau of “deep learning” and “fake” and refers to a computer program that can superimpose one person’s face onto someone else’s within a video. Unlike Snapchat’s face-swapping feature, which uses similar technology, deepfakes are starting to look far more realistic. So, naturally, people are concerned about the implications.
The neuroscientists performed two experiments: one behavioural and one using neuroimaging. In the behavioural experiment, participants were shown 50 images of real faces and computer-generated deepfakes and were asked to identify which were real and which were fake.
Then, a different group of participants was shown the same images while their brain activity was recorded using electroencephalography (EEG), without being told that half the images were fakes.
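To make the contrast between the two experiments concrete, here is a minimal sketch, in Python, of how conscious judgements and brain-based decoding could be compared against chance. The data, feature dimensions, and classifier below are illustrative assumptions only, not the researchers’ actual analysis pipeline.

```python
# Illustrative sketch only: simulated data, not the study's real results or methods.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials = 50                      # 50 images: half real, half deepfake
labels = np.repeat([0, 1], 25)     # 0 = real face, 1 = deepfake

# Behavioural experiment: participants' explicit real/fake judgements,
# simulated here as barely-above-chance guessing.
guesses = np.where(rng.random(n_trials) < 0.52, labels, 1 - labels)
behavioural_accuracy = np.mean(guesses == labels)

# Neuroimaging experiment: hypothetical per-trial EEG features (flattened),
# simulated with a weak class-dependent signal so the classes are decodable.
eeg_features = rng.normal(size=(n_trials, 64)) + 0.5 * labels[:, None]

# Decode real vs. fake from brain activity with cross-validated classification.
decoder = LogisticRegression(max_iter=1000)
eeg_accuracy = cross_val_score(decoder, eeg_features, labels, cv=5).mean()

print(f"Behavioural accuracy: {behavioural_accuracy:.0%} (chance = 50%)")
print(f"EEG decoding accuracy: {eeg_accuracy:.0%} (chance = 50%)")
```

In this kind of comparison, decoding accuracy reliably above 50% from brain signals, alongside near-chance explicit judgements, is what would suggest the brain registers the difference even when people cannot report it.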