- Autonomous machines and decision-making can lead to potentially fatal errors.
- Deaths occurring due to robotic errors produce moral dilemmas, much like “the trolley problem.”
- There is a case that many lives could be saved if society embraces machine learning and commits itself to deploying robotics technologies responsibly.
Advances in robotics mean that autonomous vehicles, industrial robots and medical robots will become more capable, more independent and more pervasive over the next 20 years. Eventually, these autonomous machines could make decision-making errors that lead to hundreds of thousands of deaths, deaths that might have been avoided had humans been in the loop.
Such a future is understandably frightening, but more lives would be saved than lost if society adopts robotic technologies responsibly.
The machine learning process
Robots aren’t “programmed” by humans to mimic human decision-making; they learn from large datasets to perform tasks like “recognize a red traffic light,” using complex mathematical models induced from the data. This machine learning process requires far more data than a human would need to learn the same task. Once trained, however, robots can outperform humans at the task in question, and machine learning has driven dramatic performance gains in AI and robotics over the past five years.
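The distinction above can be made concrete with a toy sketch: instead of a programmer hand-coding a rule for "red traffic light," a simple learning algorithm induces one from labeled examples. The perceptron below, with invented RGB training data, is purely illustrative and assumes nothing about any real perception system.

```python
# Illustrative sketch: a tiny perceptron that learns to label (r, g, b)
# colors as "red light" (1) or "not red light" (0) from labeled examples.
# All data, weights and parameters here are made up for illustration.

def train(examples, labels, epochs=20, lr=0.1):
    """Learn linear-classifier weights from labeled (r, g, b) examples."""
    w = [0.0, 0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # -1, 0 or 1; nonzero only on a mistake
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    """Apply the learned rule to a new (r, g, b) color."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy training set: normalized RGB values, labeled 1 for "red light".
examples = [(0.9, 0.1, 0.1), (0.8, 0.2, 0.1), (0.1, 0.9, 0.2), (0.2, 0.3, 0.9)]
labels = [1, 1, 0, 0]
w, b = train(examples, labels)
print(predict(w, b, (0.95, 0.05, 0.1)))  # a red-dominant color classifies as 1
```

The point of the sketch is that no one wrote a rule like "red channel above 0.8 means red light": the decision boundary was induced from examples, which is why such systems need large datasets, and why their behavior reflects the data they were trained on.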