Over the past few weeks, there have been several significant developments in the global discussion of AI risk and regulation. The emerging theme, from both the U.S. congressional hearing featuring OpenAI CEO Sam Altman and the EU's announcement of the amended AI Act, has been a call for more regulation.
But what has surprised some is the consensus among governments, researchers and AI developers on this need for regulation. In his testimony before Congress, Altman proposed creating a new government body to issue licenses for developing large-scale AI models.
He gave several suggestions for how such a body could regulate the industry, including “a combination of licensing and testing requirements,” and said firms like OpenAI should be independently audited.
However, while there is growing agreement on the risks, including potential impacts on people's jobs and privacy, there is still little consensus on what such regulations should look like or what potential audits should focus on. At the first Generative AI Summit held by the World Economic Forum, AI leaders from businesses, governments and research institutions gathered to work toward alignment on how to navigate these new ethical and regulatory considerations. Two key themes emerged: