You might have heard of killer robots, slaughterbots or terminators – officially called lethal autonomous weapons (LAWs) – from films and books. The idea of super-intelligent weapons running rampant is still science fiction. But as AI weapons become increasingly sophisticated, public concern is growing over the lack of accountability and the risk of technical failure.
We have already seen how supposedly neutral AI has produced sexist algorithms and inept content-moderation systems, largely because their creators did not understand the technology. In war, these kinds of misunderstandings could kill civilians or wreck negotiations.
For example, a target recognition algorithm could be trained to identify tanks from satellite imagery. But what if every image used to train the system also showed soldiers in formation around the tank? The algorithm might learn to associate nearby groups of people with a target, and so mistake a civilian vehicle passing through a military blockade for one.
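To make this failure mode concrete, here is a minimal, hypothetical sketch in Python (not any real targeting system; the feature names and data are invented). It shows how a classifier trained on biased examples can latch onto a spurious cue – people standing near a vehicle – rather than the vehicle itself.

```python
# Toy illustration of dataset bias: in the training data, every "tank" image
# also shows soldiers nearby, so the model learns the wrong cue.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Invented features extracted from imagery: [tank_shape_score, people_nearby_score]
is_tank = rng.random(n) < 0.5
tank_shape = rng.normal(0.0, 1.0, n) + np.where(is_tank, 1.0, 0.0)   # weak true signal
people_nearby = np.where(is_tank,                                     # biased co-occurrence:
                         rng.normal(2.0, 0.3, n),                     # tanks always have people around
                         rng.normal(0.0, 0.3, n))

X = np.column_stack([tank_shape, people_nearby])
model = LogisticRegression().fit(X, is_tank)
print("learned weights [tank_shape, people_nearby]:", model.coef_[0])

# A civilian vehicle (no tank-like shape) surrounded by people at a checkpoint
# gets a high "tank" probability because the model relies on the crowd, not the shape.
civilian_vehicle = np.array([[-0.5, 2.0]])
print("P(tank) for civilian vehicle near a crowd:",
      model.predict_proba(civilian_vehicle)[0, 1])
```

In this toy setup the model assigns most of its weight to the "people nearby" feature, which is exactly the kind of shortcut a poorly curated training set can encourage.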
Civilians in many countries (such as Vietnam, Afghanistan and Yemen) have suffered because of the way global superpowers build and use increasingly advanced weapons. Many people would argue that these weapons have done more harm than good, most recently pointing to the Russian invasion of Ukraine early in 2022.