In a world where seeing is increasingly no longer believing, experts are warning that society must take a multi-pronged approach to combat the potential harms of computer-generated media.
As Bill Whitaker reports this week on 60 Minutes, artificial intelligence can manipulate faces and voices to make it look like someone said something they never said. The result is videos of things that never happened, called "deepfakes." Often, they look so real that people watching can't tell. Just this month, Justin Bieber was tricked by a series of deepfake videos on the social media video platform TikTok that appeared to be of Tom Cruise.
These fabricated videos, named for a combination of the computer science practice known as "deep learning" and "fake," first arrived on the internet near the end of 2017. The sophistication of deepfakes has advanced rapidly in the ensuing four years, along with the availability of the tools needed to make them.