A Google developer recently claimed that one of the company’s chatbots, a large language model (LLM) called LaMDA, had become sentient.
According to a report in the Washington Post, the developer identifies as a Christian and believes the machine has something akin to a soul, that it has become sentient.
As is always the case, the “is it alive?” nonsense has lit up the news cycle. It’s a juicy story, whether you’re imagining what it would mean if the developer were right or dunking on them for being so silly.
We don’t want to dunk on anyone here at Neural, but it’s flat-out dangerous to put these kinds of ideas in people’s heads.
The more we, as a society, pretend that we’re “thiiiis close” to creating sentient machines, the easier it’ll be for bad actors, big tech, and snake oil startups to manipulate us with false claims about machine learning systems.
The burden of proof should be on the people making the claims. But what should that proof look like? If a chatbot says “I’m sentient,” who gets to decide if it really is or not?