AI Governance
Artificial intelligence (AI) is both omnipresent and conceptually slippery, making it notoriously hard to regulate. Fortunately for the rest of the world, two major experiments in the design of AI governance are currently playing out in Europe and China. The European Union (EU) is racing to pass its draft Artificial Intelligence Act, a sweeping piece of legislation intended to govern nearly all uses of AI. Meanwhile, China is rolling out a series of regulations targeting specific types of algorithms and AI capabilities. For the host of countries starting their own AI governance initiatives, learning from the successes and failures of these two initial efforts to govern AI will be crucial.
When policymakers sit down to develop a serious legislative response to AI, the first fundamental question they face is whether to take a more “horizontal” or “vertical” approach. In a horizontal approach, regulators create one comprehensive regulation that covers the many impacts AI can have. In a vertical strategy, policymakers take a bespoke approach, creating different regulations to target different applications or types of AI.
Neither the EU nor China is taking a purely horizontal or vertical approach to governing AI. But the EU’s AI Act leans horizontal and China’s algorithm regulations incline vertically. By digging into these two experiments in AI governance, policymakers can begin to draw out lessons for their own regulatory approaches.
The EU’s approach to AI governance centers on a single piece of legislation. At its core, the AI Act groups AI applications into four risk categories, each of which is governed by a predefined set of regulatory tools. Applications deemed to pose an “unacceptable risk” (such as social scoring and certain types of biometrics) are banned. “High risk” applications that pose a threat to safety or fundamental rights (think law enforcement or hiring procedures) are subject to certain pre- and post-market requirements. Applications seen as “limited risk” (emotion detection and chatbots, for instance) face only transparency requirements. The majority of AI uses are classified as “minimal risk” and subject only to voluntary measures.