
Why the EU’s Artificial Intelligence Act could harm innovation

The EU’s proposed Artificial Intelligence Act plans to restrict open-source AI. But that will come at a cost for advancement and innovation, argues Nitish Mutha of Genie AI

The proposed – and still debated – Artificial Intelligence Act (AIA) from the EU touches upon the regulation of open-source AI. But enforcing strict restrictions on the sharing and distribution of open-source general-purpose AI (GPAI) would be a completely retrograde step. It would be like rewinding the world by 30 years.

Open-source culture is the reason humanity has been able to advance technology at such speed. Only recently have AI researchers embraced sharing their code for greater transparency and verification, and putting constraints on this movement will damage the cultural progress the scientific community has made.

It takes a great deal of energy and effort to bring about a cultural shift in a community, so it would be sad and demoralising to stifle this one. The whole Artificial Intelligence Act needs to be considered very carefully, and its proposed changes have sent ripples through the open-source AI and technology community.

The ‘chilling effect’ reaction

Counteractive objectives

Two objectives from the act’s proposed regulatory framework stand out in particular:

  • ‘ensure legal certainty to facilitate investment and innovation in AI’; and
  • ‘facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation’.

Introducing regulations on GPAI seems to counteract these statements. GPAI thrives on innovation and knowledge sharing without fear of damaging legal repercussions and costs. So, rather than creating a safe market resistant to fragmentation, the likely outcome is a set of stringent legal regulations that both inhibit open-source development and further concentrate AI development in the hands of large tech companies.
