Deepfakes: When seeing is no longer believing

Artificial intelligence (AI) has changed the way organizations identify, respond to and recover from cyberattacks. Concurrently, bad actors are weaponizing AI as both an attack vector and an attack surface, adding to the growing list of digital vulnerabilities and blind spots in the insider risk space. In 2019, researchers reportedly found some 14,000 deepfake videos online, roughly double the number detected just one year prior.

One of the most prominent forms of AI exploited by bad actors today is the deepfake. Put simply, a deepfake is AI-generated media that depicts a person saying or doing something they never said or did. In an increasingly digital world, media (e.g., video, images and audio) informs decision-making, and deepfake synthetic media is designed to deceive viewers, listeners and technology systems alike.

Business email compromise’s Gen-Z sibling

While many security leaders are aware of business email compromise (BEC) attacks, the weaponization of synthetic media like deepfakes threatens both the public and private sectors through BEC's Gen-Z sibling: the business identity compromise (BIC) attack. In a BIC attack, bad actors create synthetic, fictitious personas (or personas impersonating an existing employee) and strategically deploy them through one of the many forms of synthetic media to inflict maximum damage on their target.

  • Deepfake video: Deepfake videos are created using AI, machine learning and face-swapping software. These computer-generated videos combine images to create new footage that depicts people, statements and/or events that never actually happened. Such videos can have wide-reaching effects: former President Donald Trump shared a manipulated video of House Speaker Nancy Pelosi on his Twitter account in which Speaker Pelosi appears to be impaired and possibly under the influence of a substance. The video picked up 2.5 million views on Facebook alone.
  • Deepfake audio: Audio deepfakes are AI-generated speech that is hyper-realistic but entirely synthetic. In the first known attack of this kind, fraudsters impersonated a chief executive's voice to trick a U.K.-based energy firm into transferring €220,000.
  • Textual deepfake: The early days of AI and natural language processing (NLP) painted a challenging picture of a future where machines could write like a human being. Fast-forward to 2022: language models have matured considerably, and machines can now generate text-based communications that mirror human writing (a minimal text-generation sketch follows this list). Former OpenAI Policy Director Jack Clark cautioned the U.S. House Permanent Select Committee on Intelligence in 2019 that textual deepfakes significantly aid the production of "fake news," misinformation and disinformation, as well as the impersonation of fictitious online personas that spread propaganda.
  • Deepfakes on social media: Synthetic media can be generated in a variety of ways, and the most popular technique deployed on social media is the profile image produced by a generative adversarial network (GAN); see the GAN training sketch after this list. Social media user "Katie Jones" appeared to be well connected in the Washington, D.C. political scene, linked to everyone from an economist to a Deputy Assistant Secretary of State and a senior congressional aide. There were two glaring issues with "Katie Jones": first, she isn't real, and second, the account was assessed to be operated by a state-sponsored actor targeting the U.S. The "face" behind "Katie Jones" was a completely synthetic, GAN-generated image.
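
How easily can text like this be machine-generated? The snippet below is a minimal sketch, assuming the Hugging Face transformers library and the publicly available GPT-2 checkpoint; the article names no specific model or tool, so both, along with the prompt, are illustrative stand-ins.

```python
# A minimal sketch of machine-generated text, assuming the Hugging Face
# `transformers` library and the public GPT-2 checkpoint; both are
# illustrative stand-ins, as the article names no specific tooling.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A hypothetical prompt in the style of a news lede.
prompt = "Breaking: officials confirmed this morning that"

# Sample three continuations; each reads like human-written copy.
for result in generator(prompt, max_new_tokens=40,
                        do_sample=True, num_return_sequences=3):
    print(result["generated_text"], "\n")
```

With nothing more than a prompt, an attacker can mass-produce plausible posts or messages, which is precisely the misinformation risk Clark described.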
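
To make the GAN mechanism behind synthetic profile photos concrete, here is a minimal sketch of one adversarial training step, assuming PyTorch; the network sizes, the 64x64 image shape and the hyperparameters are illustrative assumptions, not any production face generator.

```python
# A minimal sketch of one GAN training step, assuming PyTorch; the
# architecture, image shape and hyperparameters are illustrative only.
import torch
import torch.nn as nn

latent_dim, img_dim, batch = 100, 64 * 64 * 3, 16

# Generator: maps random noise vectors to flattened synthetic images.
G = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                  nn.Linear(512, img_dim), nn.Tanh())
# Discriminator: scores how "real" a flattened image looks.
D = nn.Sequential(nn.Linear(img_dim, 512), nn.LeakyReLU(0.2),
                  nn.Linear(512, 1))

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

# Placeholder batch standing in for real face photos scaled to [-1, 1].
real = torch.rand(batch, img_dim) * 2 - 1

# 1) Discriminator step: label real photos 1 and generated photos 0.
fake = G(torch.randn(batch, latent_dim)).detach()
d_loss = (loss_fn(D(real), torch.ones(batch, 1)) +
          loss_fn(D(fake), torch.zeros(batch, 1)))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# 2) Generator step: produce images the discriminator accepts as real.
fake = G(torch.randn(batch, latent_dim))
g_loss = loss_fn(D(fake), torch.ones(batch, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```

Looped over a large corpus of real photos, the two networks push each other until the generator's output, like the portrait behind "Katie Jones," becomes difficult to distinguish from a genuine photograph.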
