Generative AI is making phishing attacks more dangerous


Cybercriminals are using AI chatbots such as ChatGPT to launch sophisticated business email compromise attacks. Cybersecurity practitioners must fight fire with fire.

As AI's popularity grows and its usability expands, thanks to the continuous improvement of generative AI models, the technology is also becoming more embedded in the threat actor's arsenal.

To mitigate increasingly sophisticated AI phishing attacks, cybersecurity practitioners must both understand how cybercriminals are using the technology and embrace AI and machine learning for defensive purposes.
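To make the defensive side concrete, the following is a minimal sketch of what "embracing AI and machine learning" can look like in practice: a simple text classifier that scores incoming email and flags likely phishing for human review. The tiny in-line dataset, the feature choices and the 0.5 threshold are illustrative assumptions, not a production design; real deployments train on large, curated corpora and combine message content with sender and infrastructure signals.

```python
# Illustrative sketch: score email text for phishing likelihood and
# escalate high-scoring messages for analyst review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = phishing, 0 = legitimate.
emails = [
    "Urgent: your account will be suspended unless you verify your password now",
    "Wire transfer needed today - CEO traveling, reply with payment details",
    "Your invoice for last month's cloud usage is attached",
    "Team lunch moved to Thursday at noon, see calendar invite",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score an incoming message; the threshold here is a placeholder that
# would be tuned against real false-positive/false-negative costs.
incoming = ["Please confirm your credentials immediately to avoid account closure"]
phishing_probability = model.predict_proba(incoming)[0][1]
if phishing_probability > 0.5:
    print(f"Flag for review (score={phishing_probability:.2f})")
```

Even this toy pipeline illustrates the underlying point: the same pattern-recognition strengths that attackers exploit can be turned around to spot the linguistic and contextual hallmarks of phishing at machine speed.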

AI phishing attacks

On the attack side, generative AI increases the effectiveness and impact of a variety of cyberthreats and phishing scams. Consider the following.

General phishing attacks

Generative AI can make traditional phishing attacks — via emails, direct messages and spurious websites — more realistic by eliminating spelling errors and grammatical mistakes and adopting convincingly professional writing styles.

Large language models (LLMs) can also absorb real-time information from news outlets, corporate websites and other sources. Incorporating of-the-moment details into phishing emails could both make the messages more believable and generate a sense of urgency that compels targets to act.

Finally, AI chatbots can create and spread business email compromise and other phishing campaigns at a much faster rate than humans ever could on their own, dramatically increasing the scale and reach of such attacks.
