All hell broke loose in the AI world after The Washington Post reported last week that a Google engineer thought that LaMDA, one of the company’s large language models (LLMs), was sentient.
The news was followed by a frenzy of articles, videos and social media debates over whether current AI systems understand the world as we do, whether AI systems can be conscious, what the requirements for consciousness are, and so on.
We are at a point where large language models have become good enough to convince many people, including engineers, that they are on par with natural intelligence. At the same time, they are still bad enough to make dumb mistakes, as these experiments by computer scientist Ernest Davis show.
What makes this concerning is that research and development on LLMs is mostly controlled by large tech companies looking to commercialize the technology by integrating it into applications used by hundreds of millions of people. It is important that these applications remain safe and robust, to avoid confusing or harming their users.