Using artificial intelligence (AI) for warfare has been a promise of science fiction and politicians for years, but new research from the Georgia Institute of Technology argues that only so much can be automated and shows the value of human judgment.
"All of the hard problems in AI really are judgment and data problems, and the interesting thing about that is when you start thinking about war, the hard problems are strategy and uncertainty, or what is well known as the fog of war," said Jon Lindsay, an associate professor in the School of Cybersecurity & Privacy and the Sam Nunn School of International Affairs. "You need human sense-making and to make moral, ethical, and intellectual decisions in an incredibly confusing, fraught, scary situation."
AI decision-making is based on four key components: data about a situation, interpretation of those data (or prediction), determining the best way to act in line with goals and values (or judgment), and action. Machine learning advancements have made predictions easier, which makes data and judgment even more valuable. Although AI can automate everything from commerce to transit, judgment is where humans must intervene, Lindsay and University of Toronto Professor Avi Goldfarb wrote in the paper, "Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War," published in International Security.
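For readers who find the four-component framework easier to see laid out concretely, the sketch below is a purely hypothetical illustration (it is not drawn from the paper, and all names and numbers are invented) of how the pieces separate in an everyday commerce setting: the prediction step is the part machines can automate, while the judgment step encodes a human choice about goals and values.

```python
# A minimal, illustrative sketch (not from the paper): a toy decision loop that
# separates data, prediction, judgment, and action. All names and numbers are
# hypothetical, chosen only to show where automation fits (prediction) and
# where a person supplies goals and values (judgment).

def collect_data():
    # Data: observations about the situation (here, recent daily sales).
    return [18, 22, 19, 25, 21]

def predict(sales_history):
    # Prediction: a statistical estimate of what happens next.
    # In practice, this is the step machine learning makes cheap and fast.
    return sum(sales_history) / len(sales_history)  # naive forecast of tomorrow's demand

def judge(forecast):
    # Judgment: weighing the forecast against goals and values. How much
    # buffer stock to hold depends on which mistake is worse (running out
    # vs. overstocking) -- a value judgment a person, not the model, makes.
    safety_margin = 1.2  # human-chosen tolerance, not a computed quantity
    return round(forecast * safety_margin)

def act(order_quantity):
    # Action: carry out the chosen course of action.
    print(f"Placing order for {order_quantity} units")

act(judge(predict(collect_data())))
```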