A call to bring more human-centered design to artificial intelligence

Artificial intelligence shouldn't have to be activated through a "big red button" that delivers opaque results everyone hopes are the final word on a given question. Rather, it should remain under some degree of control by humans in the loop, who can get a sense of what the results are telling them.

Photo: Joe McKendrick

That's the word from Ge Wang, associate professor at Stanford, who calls for a human-centered design approach to AI applications and systems. In a recent webcast hosted by Stanford HAI (Human-Centered AI), Wang urges AI developers and designers to step back and consider the important role of humans in the loop. "We're so far away from answers in AI, we don't yet know most of the questions," he points out.

Many of today's AI systems are designed around the big red button, he says. "Whatever we want from AI, we're going to press the button, and as if by magic, it's going to deliver the result to us," says Wang. The perception is that "AI has this magical quality, in the sense that it exhibits this, for lack of a better word, intelligence. And it's able to do complex tasks. AI is the most powerful pattern-recognizer that we've ever built."

The question becomes, then, "what do we really want from AI?" Wang continues. "Do we want oracles all the time, that just give us the right answers without showing its work necessarily? Or, do we want tools? Things that we can use to learn to get better at? And tools that are, by definition, interactive to the human?"
