AI Regulation 2
Software engineer Blake Lemoine worked with Google’s Ethical AI team on the Language Model for Dialogue Applications (LaMDA), examining the large language model for bias on topics such as sexual orientation, gender identity, ethnicity, and religion.
Over the course of several months, Lemoine, who identifies as a Christian mystic, came to believe, based on his spiritual convictions, that LaMDA was a living being. He published transcripts of his conversations with LaMDA as well as blog posts about the AI ethics questions surrounding the model.
In June, Google put Lemoine on administrative leave; last week, he was fired. Google said Lemoine’s claims that LaMDA is sentient are “wholly unfounded.”
“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Google said in a statement. “We will continue our careful development of language models, and we wish Blake well.”