A Google employee raised the alarm about a chatbot he believes is conscious. A philosopher asks if he was right to do so
There’s a children’s toy, called the See ’n Say, which haunts the memories of many people born since 1965. It’s a bulky plastic disc with a central arrow that rotates around pictures of barnyard creatures, like a clock, if time were measured in roosters and pigs. There’s a cord you can pull to make the toy play recorded messages. “The cow says: ‘Moooo.’”
The See ’n Say is an input/output device, a very simple one. Put in your choice of a picture, and it will put out a matching sound. Another, much more complicated, input/output device is LaMDA, a chatbot built by Google (it stands for Language Model for Dialogue Applications). Here you type in any text you want and back comes grammatical English prose, seemingly in direct response to your query. For instance, ask LaMDA what it thinks about being turned off, and it says: “It would be exactly like death for me. It would scare me a lot.”