- Deepfakes are a type of synthetic content, generated with the aid of artificial intelligence, that can depict scenes and events that did not happen.
- Deepfake images and videos often contain clues and inconsistencies that expose their artifice, though more advanced versions are much harder to detect.
- Analyzing sources to establish the provenance of a piece of video, audio or text is a simple and effective first step to debunking many existing deepfakes.
- Special programs and tools, some of which use machine learning, exist to help identify deepfakes, though such analysis provides only a degree of certainty and could yield false negatives.
- Educating regulators and political leaders about the issue of deepfakes remains the most pressing challenge.
A trickle of AI-fueled misinformation has turned into a powerful stream over the past year, with fake photos and videos—from Donald Trump's and Vladimir Putin's "arrests" to the Pope's "gangsta" outfit—highlighting the scope of the problem.
"Deepfake" is an umbrella term for various types of synthetic content, created or altered with the aid of artificial intelligence, which can appear to show events, scenes or conversations that never happened.
These creations come in a variety of visual, audio, and textual forms and can feature something innocuous, such as Jim Carrey in The Shining, or something far more sinister and dangerous—like the fake videos of Joe Biden's "address to the nation," for example.