It is an iron law of progress that any innovation that benefits society also has the potential for harm. We saw it with the train and the automobile. We can already see it with genetic engineering. And now we are seeing it with artificial intelligence.
Every day brings a new report of how artificial intelligence is opening up new opportunities to detect disease and eliminate hunger, to understand the nature of the universe or to combat climate change. Yet darker uses are also emerging, including deepfakes, disinformation and autonomous weapons systems capable of using lethal force without human intervention.
We find ourselves on the doorstep of the next great societal challenge: harnessing the benefits of artificial intelligence while also ensuring it is used ethically and responsibly. It is our responsibility to establish processes and policies now to determine whether AI will be helpful or harmful in the future, and how we will protect against illicit or dangerous use. The problem is manifold: How do we ensure the private sector develops this technology ethically? What do AI ethics even entail? How do we keep social biases from being embedded in and amplified by AI?
These are not rhetorical questions. They represent issues of generational concern that require both great debate and an enormous amount of collaboration between the public and private sectors. Business leaders, academics and public servants must trust one another to devise these solutions together, smartly and thoughtfully. No one group will have the answer, and no single entity will know what is in the best interest of society regarding a technology whose potential rivals — and perhaps exceeds — that of any yet developed in human history. We must establish these tools of trust.