As the world becomes increasingly dependent on technology to communicate, attend school, work, buy groceries and more, artificial intelligence (AI) and machine learning (ML) play a growing role in our lives. Living through the second year of the COVID-19 pandemic has shown the value of technology and AI, but it has also revealed a dangerous side, and regulators have responded accordingly.
In 2021, governing bodies around the world have been working to regulate how AI and ML systems are used. From the UK to the EU to China, regulations on how industries should monitor their algorithms, best practices for auditing, and frameworks for more transparent AI systems are on the rise. In the U.S., progress on regulating artificial intelligence has lagged behind other regions. Yet over the past year, the federal government has begun taking steps toward regulating artificial intelligence across industries.
The threat to civil rights, civil liberties and privacy is one of the biggest considerations in regulating AI in the U.S. This year's debates over how AI should be handled have focused on three areas of interest: Europe and the UK, the individual U.S. states, and the U.S. federal authorities.