
When it comes to amending the Artificial Intelligence Act, practices endangering fundamental rights must be banned and high-risk applications should be strictly regulated, argues Marcel Kolaja
As the opinion rapporteur for the Artificial Intelligence Act in the Committee on Culture and Education (CULT), I will present a proposal for amending the Act in March. The draft focuses on several key areas of artificial intelligence (AI): high-risk AI in education, requirements and obligations for high-risk AI, AI and fundamental rights, and prohibited practices and transparency obligations.
The regulation aims to create a legal framework that prevents discrimination and prohibits practices that violate fundamental rights or endanger our safety or health. One of the most problematic areas is the use of remote biometric identification systems in public spaces.
Unfortunately, the use of such systems has spread rapidly, with governments and companies deploying them to monitor places of gathering, among other uses. It is all too easy for law enforcement authorities to abuse these systems for mass surveillance of citizens. Therefore, the use of remote biometric identification and emotion recognition systems crosses a line and must be banned completely.