Are AI models doomed to always hallucinate?

Large language models (LLMs) like OpenAI’s ChatGPT all suffer from the same problem: they make stuff up.

The mistakes range from strange and innocuous — like claiming that the Golden Gate Bridge was transported across Egypt in 2016 — to highly problematic, even dangerous.

A mayor in Australia recently threatened to sue OpenAI because ChatGPT mistakenly claimed he pleaded guilty in a major bribery scandal. Researchers have found that LLM hallucinations can be exploited to distribute malicious code packages to unsuspecting software developers. And LLMs frequently give bad mental health and medical advice, like that wine consumption can “prevent cancer.”

This tendency to invent “facts” is a phenomenon known as hallucination, and it happens because of the way today’s LLMs — and all generative AI models, for that matter — are developed and trained.

Training models

Generative AI models have no real intelligence — they’re statistical systems that predict words, images, speech, music or other data. Fed an enormous number of examples, usually sourced from the public web, AI models learn how likely data is to occur based on patterns, including the context of any surrounding data.

For example, given a typical email ending in the fragment “Looking forward…”, an LLM might complete it with “… to hearing back” — following the pattern of the countless emails it’s been trained on. It doesn’t mean the LLM is looking forward to anything.
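This frequency-driven completion can be sketched in a few lines of plain Python. The toy corpus and the bigram-counting approach below are illustrative assumptions, not how a real LLM works: production models use neural networks over subword tokens, not word-pair counts, but the underlying idea — pick the continuation seen most often in training data — is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "countless emails" an LLM is trained on.
# These sentences are invented for illustration.
corpus = [
    "looking forward to hearing back",
    "looking forward to hearing from you",
    "looking forward to meeting you",
    "thanks for reaching out",
]

# Count which word follows each word across the corpus.
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        next_word_counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent continuation of `word`, if any."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("forward"))  # "to" — pure frequency, no intent
print(most_likely_next("to"))      # "hearing" — seen twice vs. "meeting" once
```

Note that the model "completes" the email by statistics alone; nothing in it represents looking forward to anything, which is exactly the point of the analogy above.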

“The current framework of training LLMs involves concealing, or ‘masking,’ previous words for context” and having the model predict which words should replace the concealed ones, Sebastian Berns, a Ph.D. researcher at Queen Mary University of London, told TechCrunch in an email interview. “This is conceptually similar to using predictive text in iOS and continually pressing one of the suggested next words.”
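The masking idea Berns describes can be sketched with a toy model as well. The sketch below is an assumption-laden simplification: real masked-language-model training optimizes a neural network over huge corpora, whereas here the "model" just counts how often each vocabulary word appears between a given pair of neighbors and fills the mask with the most frequent one.

```python
from collections import Counter

# Toy corpus (invented examples); real models train on billions of tokens.
corpus = [
    "the cat sat on the mat",
    "the cat sat on the rug",
    "the dog slept on the mat",
]

# Count each word's frequency in a given (left, right) neighbor context.
context_counts = Counter()
for sentence in corpus:
    words = sentence.split()
    for left, word, right in zip(words, words[1:], words[2:]):
        context_counts[(left, right, word)] += 1

def predict_masked(left, right, vocab):
    """Fill a mask by picking the word most often seen between `left` and `right`."""
    scores = {w: context_counts[(left, right, w)] for w in vocab}
    return max(scores, key=scores.get)

vocab = {"cat", "dog", "sat", "slept", "on", "mat", "rug"}
# "the [MASK] sat ..." -> the word most often observed between "the" and "sat"
print(predict_masked("the", "sat", vocab))  # "cat" (seen twice; "dog" never)
```

As with predictive text on a phone, the filled-in word is whatever was statistically most common in that slot — plausible-sounding by construction, but with no check that it is true.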

Published by Veille-cyber
