In the 1866 novel Crime and Punishment, Russian writer Fyodor Dostoevsky drills straight into a dark and perplexing question: Is it ever acceptable for a human to take another human’s life?
More than a century and a half later, a fitting reinterpretation would cast Raskolnikov, the homicidal main character, as a robot. That’s because military analysts and human rights advocates have been battling over a newer moral frontier: Is it ever okay for a fully autonomous machine to kill a human?
The answer, right now, seems to be no: not in any official sense, but by informal global consensus. That holds despite experts' belief that fully autonomous weapons have already been deployed on battlefields in recent years.
But that question may be pushed to the official forefront very quickly in Europe: Ukrainian officials are developing so-called “killer robots,” possibly to be used in the country’s fight against Russia. Military experts warn that the longer the war goes on — the one-year anniversary arrives in February — the more likely we are to see drones that can select, engage and kill targets without an actual human finger on the trigger.
Fully autonomous killer drones are “a logical and inevitable next step” in weapons development, Mykhailo Fedorov, Ukraine’s digital transformation minister, told the Associated Press earlier this month. Ukraine has been doing “a lot” of research and development on the topic, he said, and he believes “the potential for this is great in the next six months.”