As the technology improves, foreign adversaries are expected to use AI algorithms to create increasingly realistic deepfakes and sow disinformation as part of military and intelligence operations.
Deepfakes are a class of content, generated by machine learning models, that can realistically superimpose one person's face onto another's body. They can take the form of images or videos, and are designed to make people believe someone has said or done something they haven't. The technology is often used to make fake pornographic videos of female celebrities.
As the technology has advanced, however, synthetic media has also been used to spread disinformation that fuels political conflicts. A deepfake video of Ukrainian President Volodymyr Zelensky urging his soldiers to lay down their weapons and surrender, for example, surfaced shortly after Russia invaded the country last year.
In a video posted on Facebook, Zelensky denied having said any such thing, and social media companies removed copies of the deepfake in an attempt to stop the false information from spreading.
But enemy states' efforts to create deepfakes will only continue to increase, according to AI and foreign policy researchers from Northwestern University and the Brookings Institution in the US.
A team of computer scientists from Northwestern University previously developed the Terrorism Reduction with Artificial Intelligence Deepfakes (TREAD) algorithm, and used it to generate a counterfeit video of the dead ISIS terrorist Mohammed al-Adnani.