Software engineer Blake Lemoine worked with Google’s Ethical AI team on the Language Model for Dialogue Applications (LaMDA), examining the large language model for bias on topics such as sexual orientation, gender identity, ethnicity, and religion.
Over the course of several months, Lemoine, who identifies as a Christian mystic, came to believe that LaMDA was a living being, a conclusion he based on his spiritual beliefs. He published transcripts of his conversations with LaMDA, along with blog posts about the AI ethics questions surrounding it.
In June, Google put Lemoine on administrative leave; last week, he was fired. In a statement, Google said Lemoine’s claims that LaMDA is sentient are “wholly unfounded.”
“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” the company said. “We will continue our careful development of language models, and we wish Blake well.”