Regulating AI foundation models is crucial for innovation

It would be irresponsible for the EU to cast aside regulation of European foundation model developers. To support its SMEs and ensure AI works for people and society, the EU must create rules for these companies in the AI Act, writes Connor Dunlop.

Connor Dunlop is the EU Public Policy Lead at the Ada Lovelace Institute.

The European Union has a long history of regulating technologies that pose serious risks to public safety and health. Whether it’s automobiles, planes, food safety, medical devices or drugs, the EU has established product safety laws that create clear rules for companies to follow.

These rules keep people safe, protect their fundamental rights, and ensure the public trusts these technologies enough to use them. Without regulation, essential public and commercial services are more likely to malfunction or be misused, potentially causing considerable harm to people and society.

AI technologies, which are becoming increasingly integrated into our daily lives, are no exception to this.

This is the lens through which to view the current debate in the EU over the AI Act, which seeks to establish harmonised product safety rules for AI. This includes foundation models, which pose significant risks given their potential to form the AI infrastructure that downstream SMEs build on.

That is why EU legislators have proposed guardrails for foundation model providers, including independent auditing, safety and cybersecurity testing, risk assessments and mitigation.

Given the range and severity of risks that foundation models raise, these proposals are reasonable steps for ensuring public safety and trust – and for ensuring that the SMEs using these products can be confident they are safe.

But last week, France, Germany and Italy rejected these requirements and proposed that foundation models should be exempt from any regulatory obligations.

This position has now raised the prospect of indefinitely delaying the entire EU AI Act – which covers all kinds of AI systems, from biometrics technologies to systems that impact our electoral processes.

Source: Veille-cyber
