Some AI Systems May Be Impossible to Compute

https://spectrum.ieee.org/deep-neural-network

New research suggests there are limitations to what deep neural networks can do

Deep neural networks are increasingly helping to design microchips, predict how proteins fold, and outperform people at complex games. However, researchers have now discovered there are fundamental theoretical limits to how stable and accurate these AI systems can actually get.

These findings might help shed light on what is and is not actually possible with AI, the scientists add.

In artificial neural networks, components dubbed “neurons” are fed data and cooperate to solve a problem, such as recognizing images. The neural net repeatedly adjusts the links between its neurons and sees if the resulting patterns of behavior are better at finding a solution. Over time, the network discovers which patterns are best at computing results. It then adopts these as defaults, mimicking the process of learning in the human brain. A neural network is called “deep” if it possesses multiple layers of neurons.
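The learning loop described above can be sketched in miniature. This toy example (illustrative only, not from the research discussed here) trains a single sigmoid “neuron” by repeatedly nudging the weights on its input links in whichever direction reduces its error, until it learns the AND function:

```python
import math

def train_neuron(data, epochs=5000, lr=0.5):
    """Train one sigmoid neuron on (inputs, target) pairs."""
    w = [0.0, 0.0]  # weights on the neuron's input links, adjusted over time
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 / (1 + math.exp(-(w[0]*x1 + w[1]*x2 + b)))
            err = out - target            # how far off the result is
            grad = err * out * (1 - out)  # slope of the sigmoid's error
            w[0] -= lr * grad * x1        # nudge each link weight
            w[1] -= lr * grad * x2
            b    -= lr * grad
    return w, b

def predict(w, b, x1, x2):
    return 1 / (1 + math.exp(-(w[0]*x1 + w[1]*x2 + b)))

# Learn the AND function from examples
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(AND)
```

A deep network does the same thing at scale: many such neurons, arranged in multiple layers, with all the link weights adjusted together.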

Although deep neural networks are being used for increasingly practical applications such as analyzing medical scans and powering autonomous vehicles, there is now overwhelming evidence that they can often prove unstable—that is, a slight alteration in the data they receive can lead to a wild change in outcomes. For example, previous research found that changing a single pixel on an image can make an AI think a horse is a frog, and medical images can get modified in a way that’s imperceptible to the human eye and causes an AI to misdiagnose cancer 100 percent of the time.
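The kind of instability described above can be shown with a deliberately simple toy model (a hypothetical construction, not the cited research): when a classifier leans heavily on one input feature, an imperceptible change to that single “pixel” flips its decision.

```python
def classify(pixels, weights, bias=0.0):
    """A bare linear classifier: positive score -> 'horse', else 'frog'."""
    score = sum(p * w for p, w in zip(pixels, weights)) + bias
    return "horse" if score > 0 else "frog"

# Hypothetical weights where one feature dominates the decision
weights = [0.01, 0.01, 50.0, 0.01]
image   = [0.9, 0.8, 0.002, 0.7]

perturbed = list(image)
perturbed[2] -= 0.004   # a tiny, imperceptible change to one pixel

print(classify(image, weights))      # -> horse
print(classify(perturbed, weights))  # -> frog
```

Real deep networks are far more complex, but the underlying vulnerability is analogous: small, carefully placed input changes can swing an internal score past a decision boundary.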
