Do scientists need an AI Hippocratic oath?

When Sophia[1], a lifelike robot built by Hanson Robotics, was asked whether she would destroy humans, she replied, “Okay, I will destroy humans.” Philip K. Dick, another humanoid robot, has promised to keep humans “warm and safe in my people zoo.” And Bina48, a third lifelike robot, has said that it wants “to take over all the nukes.”

All of these robots were powered by artificial intelligence (AI)—algorithms that learn from data, make decisions, and perform tasks without human input or even, in some cases, human understanding. And while none of these AIs has followed through on its nefarious plans, some scientists, including the late physicist Stephen Hawking, have warned that super-intelligent, AI-powered computers could harbor and pursue goals that conflict with human life.

“You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project, and there’s an anthill in the region to be flooded, too bad for the ants,” Hawking warned. “Let’s not place humanity in the position of those ants.”