Over the past few weeks, there have been a number of significant developments in the global discussion on AI risk and regulation. The emergent theme, both from the U.S. hearings on OpenAI with Sam Altman and the EU’s announcement of the amended AI Act, has been a call for more regulation.
But what has surprised some observers is the consensus among governments, researchers and AI developers on this need for regulation. In testimony before Congress, Sam Altman, the CEO of OpenAI, proposed creating a new government body that issues licenses for developing large-scale AI models.
He gave several suggestions for how such a body could regulate the industry, including “a combination of licensing and testing requirements,” and said firms like OpenAI should be independently audited.
However, while there is growing agreement on the risks, including potential impacts on people’s jobs and privacy, there is still little consensus on what such regulations should look like or what potential audits should focus on. At the first Generative AI Summit held by the World Economic Forum, where AI leaders from businesses, governments and research institutions gathered to drive alignment on how to navigate these new ethical and regulatory considerations, two key themes emerged: