People are more aware of disinformation than they used to be. According to one recent poll, nine out of 10 American adults fact-check their news, and 96% want to limit the spread of false information.
But it’s becoming tougher — not easier — to stem the firehose of disinformation with the advent of generative AI tools.
That was the high-level takeaway from the disinformation and AI panel on the AI Stage at TechCrunch Disrupt 2023, which featured Sarah Brandt, the EVP of partnerships at NewsGuard, and Andy Parsons, the senior director of the Content Authenticity Initiative (CAI) at Adobe. The panelists spoke about the threat of AI-generated disinformation and potential solutions as an election year looms.
Parsons framed the stakes in fairly stark terms:
“Without a core foundation and objective truth that we can share, frankly — without exaggeration — democracy is at stake. Being able to have objective conversations with other humans about shared truth is at stake.”
Both Brandt and Parsons acknowledged that web-borne disinformation, AI-assisted or not, is hardly a new phenomenon. Parsons referred to the 2019 viral clip of former House Speaker Nancy Pelosi (D-CA), which used crude editing to make it appear as though Pelosi was speaking in a slurred, awkward way.
But Brandt also noted that — thanks to AI, particularly generative AI — it’s becoming a lot cheaper and simpler to generate and distribute disinformation on a massive scale.