Over the past few years, many organizations have been racing to build, launch, and scale their artificial intelligence (AI) and machine learning (ML) models. However, a lack of guardrails and governance means that black-box ML models are released into the real world, where they can have significant and often unintended impacts on businesses and the public. This is where the concept and practice of Responsible AI come into play.
Responsible AI (RAI) combines the methodology and tools that must be in place for society to adopt AI and enjoy its game-changing benefits while minimizing unintended consequences. Over the last year, Responsible AI has been discussed at length by the AI community: many large organizations, such as Microsoft and Google, have created their own RAI guidelines, and governments have proposed legislation. While it's important to have this conversation, what tends to get lost is how to put these concepts into practice.