When things go wrong, flexible moral intuitions cause us to judge computers more severely
Before AI was hot, Henry Lieberman, a computer scientist, invited me to see his group’s work at MIT. Henry was obsessed with the idea that AI lacked common sense. So, together with his colleagues Catherine Havasi and Robyn Speer, he had been collecting commonsense statements on a website.
Commonsense statements are facts that are obvious to humans but hard for machines to grasp. They are things such as “water is wet” or “love is a feeling.” They are also a sore spot for AI, since scholars are still working to understand why machines struggle with commonsense reasoning. That day, Henry was eager to show me a chart in which words such as love, water, or feeling were arranged according to data from their commonsense corpus. The chart was a plot made using a technique called principal component analysis, a method that finds the axes along which a numeric dataset varies the most.
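To give a flavor of what such a plot involves, here is a minimal sketch of principal component analysis in Python. The word vectors below are hypothetical random stand-ins, not the group’s actual corpus data; the idea is simply that each word becomes a row of numbers, and PCA projects those rows onto the two axes that capture the most variation.

```python
# A minimal PCA sketch. The vectors here are random placeholders,
# assumed for illustration; real corpus-derived features would replace them.
import numpy as np
from sklearn.decomposition import PCA

words = ["love", "water", "feeling", "wet"]

# Hypothetical feature vectors: one row per word, ten features each.
rng = np.random.default_rng(0)
vectors = rng.normal(size=(len(words), 10))

# PCA finds the orthogonal axes that explain the most variance,
# then projects each word onto the top two of those axes.
pca = PCA(n_components=2)
coords = pca.fit_transform(vectors)

for word, (x, y) in zip(words, coords):
    print(f"{word}: ({x:+.2f}, {y:+.2f})")

# How much of the total variation the two axes capture.
print("explained variance ratio:", pca.explained_variance_ratio_)
```

Plotting the resulting two coordinates per word yields exactly the kind of chart Henry showed: words scattered on a plane whose axes are the directions of greatest variation in the data.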