When it comes to amending the Artificial Intelligence Act, practices endangering fundamental rights must be banned and high-risk applications should be strictly regulated, argues Marcel Kolaja
As the opinion rapporteur for the Artificial Intelligence Act in the Committee on Culture and Education (CULT), I will present a proposal for amending the Artificial Intelligence Act in March. The draft focuses on several key areas of artificial intelligence (AI): high-risk AI in education, requirements and obligations for high-risk AI, AI and fundamental rights, as well as prohibited practices and transparency obligations.
The regulation aims to create a legal framework that prevents discrimination and prohibits practices that violate fundamental rights or endanger our safety or health. One of the most problematic areas is the use of remote biometric identification systems in public spaces.
Unfortunately, the use of such systems has increased rapidly, especially by governments and companies monitoring places of gathering, for example. It is all too easy for law enforcement authorities to abuse these systems for mass surveillance of citizens. The use of remote biometric identification and emotion recognition systems therefore crosses the line and must be banned completely.