AI-generated photos, a form of synthetic media, are being used to create fake experts and journalists who spread disinformation.
- Synthetic media is an umbrella term that covers, among other things, the use of AI to generate “deepfake” photos.
- The technology has advanced rapidly, and malicious actors are using it to spread extremely convincing propaganda.
- Governments and Big Tech companies are trying to fight back, but their countermeasures may carry unintended consequences.
In the last few years, many strategies and tactics have been used to generate and spread online misinformation. But one recent approach poses a serious and novel threat to society: using artificial intelligence to create highly realistic photos of fictitious personas who purport to be journalists or subject-matter experts.
These AI-generated personas are a form of synthetic media, typically produced with a generative adversarial network (GAN): two neural networks trained in competition, where a generator produces candidate images and a discriminator judges whether each one looks real, so that the generator gradually learns to produce images the discriminator cannot tell apart from genuine photos. Many websites and applications can now generate such photos for users with no technical background, and the results are remarkably convincing.
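To make the adversarial idea concrete, here is a minimal sketch of the GAN training loop, not an image model: a toy generator and discriminator (each a single affine/logistic layer, in NumPy) competing over a one-dimensional "real data" distribution. The network sizes, learning rate, and target distribution are illustrative assumptions; real photo-generating GANs use deep convolutional networks and far more elaborate training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for "real photos": samples from a 1-D Gaussian
# (mean 4.0, std 0.5) that the generator must learn to imitate.
def real_batch(n):
    return rng.normal(4.0, 0.5, size=(n, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: a single affine map from noise z to a fake sample.
g_w, g_b = rng.normal(size=(1, 1)), np.zeros(1)
# Discriminator: logistic regression scoring P(sample is real).
d_w, d_b = rng.normal(size=(1, 1)), np.zeros(1)

lr, batch = 0.05, 64
for step in range(2000):
    # --- discriminator step: separate real samples from fakes ---
    z = rng.normal(size=(batch, 1))
    fake = z @ g_w + g_b
    x = np.vstack([real_batch(batch), fake])
    y = np.vstack([np.ones((batch, 1)), np.zeros((batch, 1))])
    p = sigmoid(x @ d_w + d_b)
    d_w -= lr * (x.T @ (p - y)) / len(x)   # cross-entropy gradient
    d_b -= lr * np.mean(p - y)
    # --- generator step: adjust fakes to fool the discriminator ---
    z = rng.normal(size=(batch, 1))
    fake = z @ g_w + g_b
    p = sigmoid(fake @ d_w + d_b)
    # gradient of -log D(G(z)) w.r.t. generator parameters (chain rule)
    dfake = (p - 1.0) @ d_w.T / batch
    g_w -= lr * (z.T @ dfake)
    g_b -= lr * dfake.sum(axis=0)

# After training, generated samples should cluster near the real mean.
fake_mean = float((rng.normal(size=(1000, 1)) @ g_w + g_b).mean())
print(fake_mean)
```

The same competition, scaled up to deep networks trained on millions of face photographs, is what lets consumer-facing sites produce convincing portraits of people who do not exist.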