Analyzing images in the blink of an eye

Research in the field of machine learning and AI, now a key technology in practically every industry and company, is far too voluminous for anyone to read it all. This column, Perceptron (previously Deep Science), aims to collect some of the most relevant recent discoveries and papers — particularly in, but not limited to, artificial intelligence — and explain why they matter.

This week in AI, engineers at Penn State announced that they’ve created a chip that can process and classify nearly two billion images per second. Carnegie Mellon, meanwhile, has signed a $10.5 million U.S. Army contract to expand its use of AI in predictive maintenance. And at UC Berkeley, a team of scientists is applying AI research to solve climate problems, like understanding snow as a water resource.

The Penn State work aimed to overcome the limitations of traditional processors when applied to AI workloads — specifically recognizing and classifying images or the objects in them. Before a machine learning system can process an image, it must be captured by a camera’s image sensor (assuming it’s a real-world image), converted by the sensor from light to electrical signals, and then converted again into binary data. Only then can the system sufficiently “understand” the image to process, analyze and classify it.
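The multi-stage pipeline described above can be sketched in a few lines of code. The sensor model, gain value, and brightness-based classifier below are illustrative assumptions for the sake of the sketch, not the Penn State design; the point is simply how many conversion steps sit between light and a label in a conventional system.

```python
import numpy as np

def capture(light_intensity: np.ndarray) -> np.ndarray:
    """Sensor stage: convert incident light to an analog signal (toy linear model)."""
    sensitivity = 0.8  # assumed sensor gain, purely illustrative
    return light_intensity * sensitivity

def digitize(analog_signal: np.ndarray, bits: int = 8) -> np.ndarray:
    """ADC stage: quantize the analog signal into binary pixel values."""
    levels = 2 ** bits - 1
    return np.clip(np.round(analog_signal * levels), 0, levels).astype(np.uint8)

def classify(pixels: np.ndarray, threshold: float = 127.5) -> str:
    """Toy classifier stage: label the frame by mean brightness."""
    return "bright" if pixels.mean() > threshold else "dark"

# A synthetic "scene": normalized light intensities in [0, 1].
scene = np.full((4, 4), 0.9)
label = classify(digitize(capture(scene)))
```

Each hand-off — light to analog, analog to binary, binary to model input — adds latency, which is the bottleneck a photonic chip that computes directly on light can avoid.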
