In the video record of the Ukraine war, a clumsy attempt to “deepfake” Ukrainian President Volodymyr Zelensky coexists with critical on-the-ground video evidence of abuses, pervasive misinformation spread for grift and attention, and Russian false-flag operations.
These scenes from the war offer a glimpse of a future in which, alongside existing forms of manipulation and misattribution, deepfake technology (images that have been “convincingly altered and manipulated to misrepresent someone doing or saying something that was not actually done or said”) will be employed more readily. More false videos will be forged, and the ‘liar’s dividend’ will be used to cast doubt on authentic ones.
One set of solutions to these current and future problems proposes to track more reliably where media comes from and what has been synthesized, edited or otherwise changed, and how. This ‘authenticity and provenance’ infrastructure deserves close attention, both for its possibilities and for preventive work on its risks.
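The core idea of such infrastructure can be illustrated with a minimal sketch: each step in a piece of media’s life (capture, edit, publication) is recorded as an entry that signs its own contents and hashes the entry before it, so that any later tampering with the history is detectable. This sketch is purely illustrative and assumes a shared HMAC key for signing; real provenance standards rely on X.509 certificate chains and a far richer manifest format, and the `record_step` and `verify` functions here are hypothetical, not part of any actual specification.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for this sketch only; real systems would use
# per-actor certificates rather than a single shared secret.
SIGNING_KEY = b"demo-key"


def _digest(entry):
    """Stable hash of a full entry, used to chain entries together."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()


def record_step(chain, action, actor):
    """Append a signed provenance entry that hashes the previous one."""
    prev_hash = _digest(chain[-1]) if chain else None
    entry = {"action": action, "actor": actor, "prev": prev_hash}
    entry["sig"] = hmac.new(
        SIGNING_KEY, json.dumps(entry, sort_keys=True).encode(), hashlib.sha256
    ).hexdigest()
    chain.append(entry)
    return chain


def verify(chain):
    """Check every signature and every hash link back to the origin."""
    for i, entry in enumerate(chain):
        body = {k: v for k, v in entry.items() if k != "sig"}
        expected = hmac.new(
            SIGNING_KEY, json.dumps(body, sort_keys=True).encode(), hashlib.sha256
        ).hexdigest()
        if not hmac.compare_digest(expected, entry["sig"]):
            return False  # entry was altered after signing
        if entry["prev"] != (_digest(chain[i - 1]) if i else None):
            return False  # chain link broken: history was rewritten
    return True


chain = []
record_step(chain, "capture", "camera-01")
record_step(chain, "crop", "photo-editor")
record_step(chain, "publish", "news-outlet")
print(verify(chain))  # True: the recorded history is intact
chain[1]["action"] = "swap-face"
print(verify(chain))  # False: tampering invalidates the signature
```

The point of the hash link is that an attacker cannot quietly replace an earlier step: changing any entry breaks both its own signature and the hash stored in the next entry.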
In January, the Coalition for Content Provenance and Authenticity (C2PA), led by the BBC, Microsoft, Adobe, Intel, Twitter, TruePic, Sony and Arm, proposed the first global technical standards for tracking which content is authentic and which has been manipulated. The specifications provide a way to follow a piece of media from its capture on a camera, through editing, to its distribution by major media outlets or on a social media feed. Companies are