Why the EU’s Artificial Intelligence Act could harm innovation


The EU’s proposed Artificial Intelligence Act plans to restrict open-source AI. But that will come at a cost for advancement and innovation, argues Nitish Mutha of Genie AI

The proposed – and still debated – Artificial Intelligence Act (AIA) from the EU touches upon the regulation of open-source AI. But enforcing strict restrictions on the sharing and distribution of open-source general-purpose AI (GPAI) is a completely retrograde step. It would be like rewinding the world by 30 years.

Open-source culture is the only reason mankind has been able to progress technology at such speed. Only recently have AI researchers embraced sharing their code for greater transparency and verification, and putting constraints on this movement will damage the cultural progress the scientific community has made.

It takes a great deal of energy and effort to cause a cultural shift in a community, so it would be sad and demoralising to stunt this one. The whole Artificial Intelligence Act needs to be considered very carefully; its proposed changes have already sent ripples through the open-source AI and technology community.

The ‘chilling effect’ reaction

Counteractive objectives

Two objectives from the act’s proposed regulatory framework stand out in particular:

  • ‘ensure legal certainty to facilitate investment and innovation in AI’ and
  • ‘facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation’

Introducing regulations on GPAI seems to counteract these statements. GPAI thrives on innovation and knowledge sharing free from the fear of damaging legal repercussions and costs. So, rather than creating a safe market that withstands fragmentation, what could actually emerge is a set of stringent legal regulations that both inhibit open-source development and further concentrate AI development in the hands of large tech companies.