Many agree on what responsible, ethical AI looks like — at least at a zoomed-out level. But outlining key goals, like privacy and fairness, is only the first step. The next? Turning ideals into action.
Policymakers need to determine whether existing laws and voluntary guidance are powerful enough tools to enforce good behavior, or if new regulations and authorities are necessary.
And organizations will need to plan how to shift their culture and practices to ensure they follow responsible AI guidance. That could matter for compliance purposes or simply for preserving customer trust.
Public institutions and organizations in Asia, Europe and North America tend to agree that "responsible" AI supports accountability, explainability, fairness, human oversight, privacy, robustness and security, according to IAPP's recent Privacy and AI Governance report, which drew on interviews with entities in those regions.
Now developers, procurement officials and others may need more specific, fine-grained guidance on which tools and benchmarks can help them achieve these goals.