Dumb AI is a bigger risk than strong AI

The year is 2052. The world has averted the climate crisis thanks to finally adopting nuclear power for the majority of its power generation. Conventional wisdom now holds that nuclear power plants are merely a problem of complexity; Three Mile Island is a punchline rather than a disaster. Fears about nuclear waste and plant blowups have been alleviated primarily through better software automation. What nobody knew is that the software running all of these plants, made by a few different vendors around the world, shares the same bias. After two decades of flawless operation, several unrelated plants fail in the same year. The council of nuclear power CEOs realizes that everyone who knows how to operate a Class IV nuclear power plant is either dead or retired. We now have to choose between modernity and unacceptable risk.

Artificial Intelligence, or AI, is having a moment. After a multi-decade “AI winter,” machine learning has awakened from its slumber to find a world of technical advances like reinforcement learning and transformers, along with computational resources that have finally matured enough to take advantage of them.

AI’s ascendance has not gone unnoticed; in fact, it has spurred much debate. The conversation is often dominated by those who are afraid of AI. These people range from ethical AI researchers worried about bias to rationalists contemplating extinction events. Their concerns tend to revolve around AI that is hard to understand or too intelligent to control, ultimately doing an end run around the goals of us, its creators. AI boosters usually respond with a techno-optimist tack: they argue that these worrywarts are simply wrong, pointing to abstract arguments of their own as well as hard data on the good work AI has done for us so far, to imply that it will continue to do good for us in the future.