A critical review of the EU’s ‘Ethics Guidelines for Trustworthy AI’


Europe has some of the most progressive, human-centric artificial intelligence governance policies in the world. Compared to the heavy-handed government oversight in China or the Wild West-style, anything-goes approach in the US, the EU's strategy is designed to foster academic and corporate innovation while also protecting private citizens from harm and overreach. But that doesn't mean it's perfect.

The 2018 initiative

In 2018, the European Commission began its European AI Alliance initiative. The alliance exists so that various stakeholders can weigh in and be heard as the EU considers its ongoing policies governing the development and deployment of AI technologies.

Since 2018, more than 6,000 stakeholders have participated in the dialogue through various venues, including online forums and in-person events.

The commentary, concerns, and advice provided by those stakeholders have been considered by the EU's High-Level Expert Group on Artificial Intelligence, which ultimately produced four key documents that serve as the basis for the EU's policy discussions on AI:

1. Ethics Guidelines for Trustworthy AI

2. Policy and Investment Recommendations for Trustworthy AI

3. Assessment List for Trustworthy AI

4. Sectoral Considerations on the Policy and Investment Recommendations