When it comes to amending the Artificial Intelligence Act, practices endangering fundamental rights must be banned and high-risk applications should be strictly regulated, argues Marcel Kolaja
As the opinion rapporteur for the Artificial Intelligence Act in the Committee on Culture and Education (CULT), I will present a proposal for amending the Act in March. The draft focuses on several key areas of artificial intelligence (AI): high-risk AI in education, requirements and obligations for high-risk AI, AI and fundamental rights, as well as prohibited practices and transparency obligations.
The regulation aims to create a legal framework that prevents discrimination and prohibits practices that violate fundamental rights or endanger our safety or health. One of the most problematic areas is the use of remote biometric identification systems in public spaces.
Unfortunately, the use of such systems has increased rapidly, with governments and companies deploying them, for example, to monitor places where people gather. It is all too easy for law enforcement authorities to abuse these systems for mass surveillance of citizens. Therefore, the use of remote biometric identification and emotion recognition systems crosses the line and must be banned completely.