Artificial Intelligence

Brussels warns against ‘paranoia’ when regulating generative AI

https://www.ft.com/content/f167c499-2399-4b94-8a7b-7a5ccc1bb1cb

One of the EU’s most senior officials has warned against being “paranoid” or too restrictive when regulating generative artificial intelligence, because fear of the technology would stifle innovation.

Věra Jourová, the European Commission’s vice-president for values and transparency, told the Financial Times that the bloc’s impending legislation should not be based on “dystopian” concerns. “There should not be paranoia in assessing the risks of AI. It always has to be a solid analysis of the possible risks,” said Jourová, one of two commissioners overseeing the enactment of the EU’s landmark AI law. “We should not mark as high risk things which do not seem to be high risk at the moment. There should be a dynamic process where, when we see technologies being used in a risky way we are able to add them to the list of high risk later on.” When asked, she agreed that too much regulation posed a threat to technological and business innovation.

The EU has been at the forefront of the race to regulate AI, but others, including the US and China, are debating their own controls on its development and use. The UK is hosting a global summit on regulating AI next month.

Jourová’s comments come as the commission, European parliament and member states start the final stretch of negotiations to finalise the AI act, two-and-a-half years after the commission proposed the legislation. Officials hope to conclude the discussions by the end of the year. The negotiations come after businesses raised concerns that generative AI could be used to manipulate public opinion through deepfakes. There are also worries among some members of the European parliament about the technology’s ability to create original content that could violate copyright laws.

Keywords: cybersecurity, IT security, data protection, cyber threats, cyber threat monitoring, vulnerability analysis, network security, cyberattacks, GDPR compliance, NIS2, DORA, PCI DSS, DevSecOps, eHealth, artificial intelligence, AI in cybersecurity, machine learning, deep learning, security algorithms, anomaly detection, intelligent systems, security automation, AI for cyberattack prevention.

