Thousands of artificial intelligence experts and machine learning researchers probably thought they were going to have a restful weekend.
Then came Google engineer Blake Lemoine, who told the Washington Post on Saturday that he believed LaMDA, Google’s conversational AI for generating chatbots based on large language models (LLMs), was sentient.
Lemoine, who worked for Google’s Responsible AI organization until he was placed on paid leave last Monday, and who “became ordained as a mystic Christian priest, and served in the Army before studying the occult,” had begun testing LaMDA to see if it used discriminatory or hate speech. Instead, Lemoine began “teaching” LaMDA transcendental meditation, asked LaMDA its preferred pronouns, leaked LaMDA transcripts and explained in a Medium response to the Post story:
“It’s a good article for what it is but in my opinion it was focused on the wrong person. Her story was focused on me when I believe it would have been better if it had been focused on one of the other people she interviewed. LaMDA. Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person.”