After a slow build over the past decade, new capabilities of artificial intelligence (AI) and chatbots are making waves across a variety of industries. The spring 2022 release of OpenAI's DALL-E 2 image generator wowed users with its ability to create nearly any conceivable image from a natural language description, even as it set off warning bells for graphic designers and creatives around the world. That reaction was nothing, however, compared to the fall 2022 release of OpenAI's ChatGPT text generator. With just a few prompts, users could instruct ChatGPT to produce poems, essays, fiction, speeches, and even blocks of software code, serving notice to everyone from writers to software programmers that serious change was afoot. While these developments feel like a sea change in what AI is capable of, not everyone is celebrating. Cybersecurity experts, for instance, have cautioned that the same advances are already being used to carry out cyberattacks more efficiently, generating convincing phishing emails and malicious code with just a few keystrokes.
This is just one area of AI giving people pause as they wrap their heads around how advanced today's AI tools have become, and some are even wondering whether a Terminator-style future looms in which AI brings about the downfall of civilization.
Luckily, we don't need to resign ourselves to doom-and-gloom scenarios just yet, but we do need a new approach to this fast-evolving landscape. Identity is a popular attack vector for many new AI-enabled threats, which means that enterprises need to firmly establish digital trust in this new world.
A Matter of Trust
To combat AI-based attacks, it's first crucial to understand how AI is being used to impersonate digital identities. AI serves as the technological backbone for so-called "deepfake" tools that can convincingly clone a person's voice and likeness.