Lessons From the World’s Two Experiments in AI Governance


Artificial intelligence (AI) is both omnipresent and conceptually slippery, making it notoriously hard to regulate. Fortunately for the rest of the world, two major experiments in the design of AI governance are currently playing out in Europe and China. The European Union (EU) is racing to pass its draft Artificial Intelligence Act, a sweeping piece of legislation intended to govern nearly all uses of AI. Meanwhile, China is rolling out a series of regulations targeting specific types of algorithms and AI capabilities. For the many countries starting their own AI governance initiatives, learning from the successes and failures of these two initial efforts to govern AI will be crucial.

When policymakers sit down to develop a serious legislative response to AI, the first fundamental question they face is whether to take a more “horizontal” or “vertical” approach. In a horizontal approach, regulators create one comprehensive regulation that covers the many impacts AI can have. In a vertical strategy, policymakers take a bespoke approach, creating different regulations to target different applications or types of AI.

Neither the EU nor China is taking a purely horizontal or vertical approach to governing AI. But the EU’s AI Act leans horizontal, while China’s algorithm regulations lean vertical. By digging into these two experiments in AI governance, policymakers can begin to draw out lessons for their own regulatory approaches.

THE EU’S APPROACH

The EU’s approach to AI governance centers on a single piece of legislation. At its core, the AI Act groups AI applications into four risk categories, each of which is governed by a predefined set of regulatory tools. Applications deemed to pose an “unacceptable risk” (such as social scoring and certain types of biometrics) are banned. “High risk” applications that pose a threat to safety or fundamental rights (think law enforcement or hiring procedures) are subject to certain pre- and post-market requirements. Applications seen as “limited risk” (emotion detection and chatbots, for instance) face only transparency requirements. The majority of AI uses are classified as “minimal risk” and are subject only to voluntary measures.
