Software engineer Blake Lemoine worked with Google's Ethical AI team on the Language Model for Dialogue Applications (LaMDA), examining the large language model for bias on topics such as sexual orientation, gender identity, ethnicity, and religion.
Over the course of several months, Lemoine, who identifies as a Christian mystic, came to believe that LaMDA was a sentient, living being, based in part on his spiritual beliefs. He published transcripts of his conversations with LaMDA, along with blog posts about the AI ethics questions it raised.
In June, Google put Lemoine on administrative leave; last week, he was fired. In a statement, Google said Lemoine's claims that LaMDA is sentient are "wholly unfounded."
"It's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information," Google said in a statement. "We will continue our careful development of language models, and we wish Blake well."