Many agree on what responsible, ethical AI looks like — at least at a zoomed-out level. But outlining key goals, like privacy and fairness, is only the first step. The next? Turning ideals into action.
Policymakers need to determine whether existing laws and voluntary guidance are powerful enough tools to enforce good behavior, or if new regulations and authorities are necessary.
And organizations will need to plan for how they can shift their culture and practices to ensure they’re following responsible AI advice. That could be important for compliance purposes or simply for preserving customer trust.
Public institutions and organizations in Asia, Europe and North America tend to agree that “responsible” AI supports accountability, explainability, fairness, human oversight, privacy, robustness and security, according to the IAPP’s recent Privacy and AI Governance report, which interviewed entities in those regions.
Now developers, procurement officials and others may need more specific, fine-grained guidance on which tools and benchmarks can help them achieve these goals.