Synthetic media, better known as deepfakes, could be a goldmine for filmmakers. But the technology has already terrorized women who have had their faces inserted into pornography, and it could disrupt society at large.
You may never have heard the term "synthetic media" — more commonly known as "deepfakes" — but our military, law enforcement and intelligence agencies certainly have. Deepfakes are hyper-realistic video and audio recordings that use artificial intelligence and "deep" learning to create "fake" content. The U.S. government has grown increasingly concerned about their potential to spread disinformation and enable crimes. That's because the creators of deepfakes can make people appear to say or do anything, at least on our screens. Most Americans have no idea how far the technology has come in just the last four years, or the danger, disruption and opportunities that come with it.