A Google developer recently decided that one of the company’s chatbots, a large language model (LLM) called LaMDA, had become sentient.
According to a report in the Washington Post, the developer identifies as a Christian and believes the machine has something akin to a soul, that it has become sentient.
As is always the case, the “is it alive?” nonsense has lit up the news cycle. It’s a juicy story whether you’re imagining what it would mean if the dev were right or dunking on them for being so silly.
We don’t want to dunk on anyone here at Neural, but it’s flat-out dangerous to put these kinds of ideas in people’s heads.
The more we, as a society, pretend that we’re “thiiiis close” to creating sentient machines, the easier it’ll be for bad actors, big tech, and snake oil startups to manipulate us with false claims about machine learning systems.
The burden of proof should be on the people making the claims. But what should that proof look like? If a chatbot says “I’m sentient,” who gets to decide if it really is or not?