Why Effective Altruists Fear the AI Apocalypse

Humanity is a wayward teenager. Our species has its whole life ahead of it, but the decisions we make now will irrevocably shape the course of our adulthood. We could recognize the stakes of this critical moment, buckle down, do our homework, drink responsibly, eat sustainably, prepare for pandemics, avert robot apocalypses, realize our full potential, and live a long, prosperous, meaningful life before dying peacefully in a supernova at the ripe old age of 1 trillion. Or we could party all the time, get into fights, start a nuclear war, create doomsday bioweapons, tremble before our new robot overlords, live fast, die young, and leave an irradiated corpse. We owe it to our future selves — which is to say, to the hundreds of billions of potential future humans — to choose wisely.

So argues the philosopher William MacAskill in his new book, What We Owe the Future. MacAskill is a professor at Oxford and leader of the “effective altruism” movement. In recent years, his concern for maximizing his positive impact on the world has led him to champion “longtermism,” a philosophy that insists on the moral worth of future people and, thus, our moral obligation to protect their interests. Longtermists argue that humanity should be investing far more resources into mitigating the risk of future catastrophes in general and extinction events in particular. They are especially concerned with the possibility that humanity will one day develop an artificial general intelligence, or AGI, that could abet a global totalitarian dictatorship or decide to treat humanity like obsolete software — and delete us from the planet.
