This new dataset shows that AI still lacks commonsense reasoning

Abductive reasoning, frequently mistaken for deductive reasoning, is the process of drawing a plausible conclusion from incomplete information. For example, given a photo showing a toppled truck and a police cruiser on a snowy freeway, abductive reasoning might lead someone to infer that dangerous road conditions caused an accident.

Humans can quickly consider this sort of context to arrive at a hypothesis. But AI struggles, despite recent technical advances. Motivated to explore the challenge, researchers at the Allen Institute for Artificial Intelligence, the University of California, Berkeley, and the MIT-IBM Watson AI Lab created a dataset called Sherlock, a collection of more than 100,000 scene images, each paired with clues a viewer could use to answer questions about it. As project contributor Jack Hessel, an Allen Institute research scientist, explains, Sherlock is designed to test the ability of AI systems to reason abductively from textual and visual clues.
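To make the task concrete, here is a minimal sketch of what a single visual-abduction example might look like in code. The class and field names below are illustrative assumptions for this article, not the Sherlock dataset's actual schema.

```python
from dataclasses import dataclass

# Hypothetical illustration of one visual-abduction example; the field
# names are assumptions, not the Sherlock dataset's real format.
@dataclass
class AbductionExample:
    image_path: str  # scene photo, e.g. a snowy freeway
    clue: str        # literal observation grounded in the image
    inference: str   # plausible conclusion that goes beyond what is shown

example = AbductionExample(
    image_path="freeway_scene.jpg",
    clue="a toppled truck and a police cruiser on a snowy freeway",
    inference="dangerous road conditions caused an accident",
)

# A system tested on this kind of task would be judged on whether it can
# produce (or rank) inferences like example.inference given the image and clue.
print(f"Clue: {example.clue}\nInference: {example.inference}")
```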

“With Sherlock, we aimed to study visual abductive reasoning: i.e., probable and salient conclusions that go beyond what’s literally depicted in an image. We named the dataset after Sherlock Holmes, who iconically embodies abduction,” Hessel told VentureBeat via email. “This type of commonsense reasoning is an important part of human cognition; Jerry Hobbs, distinguished computational linguist and ACL 2013 Lifetime Achievement Award winner, perhaps put it best in his acceptance speech: ‘The brain is an abduction machine, continuously trying to prove abductively that the observables in its environment constitute a coherent situation.’”
