Explanation methods that help users determine whether to trust machine-learning model predictions can be less accurate for disadvantaged subgroups, a new study finds.
When the stakes are high, machine-learning models are sometimes used to aid human decision-makers. For instance, a model could predict which law school applicants are most likely to pass the bar exam to help an admissions officer determine which students should be accepted.
These models often have millions of parameters, so how they make predictions is nearly impossible for researchers to fully understand, let alone for an admissions officer with no machine-learning experience. Researchers sometimes employ explanation methods that mimic the larger model by creating simple approximations of its predictions. These approximations, which are far easier to understand, help users determine whether to trust the model’s predictions.
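The article does not name a specific explanation method, but a common example of this idea is a local surrogate, as in methods like LIME: perturb one instance, ask the complex model to label the perturbed points, and fit a simple linear model to those labels. Below is a minimal sketch of that idea, assuming scikit-learn is available; the synthetic data, function name, and parameter choices are illustrative, not taken from the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Hypothetical stand-in for application data; the black box is a random forest.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def local_surrogate(x, model, n_samples=500, scale=0.5, seed=0):
    """Fit a simple linear approximation of `model` around one instance `x`."""
    rng = np.random.default_rng(seed)
    # Perturb the instance to generate a neighborhood of nearby inputs.
    neighborhood = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    # Label the neighborhood with the black-box model's own predicted probabilities.
    probs = model.predict_proba(neighborhood)[:, 1]
    # The linear surrogate's coefficients act as the human-readable explanation.
    return Ridge(alpha=1.0).fit(neighborhood, probs)

applicant = X[0]
explanation = local_surrogate(applicant, black_box)
print("feature weights:", np.round(explanation.coef_, 2))
```

The simple model's weights are what an admissions officer would actually see; how faithfully they track the black box's behavior is the property the study examines.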
But are these explanation methods fair? If an explanation method provides better approximations for men than for women, or for white people than for Black people, it may encourage users to trust the model’s predictions for some people but not for others.
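One way to make that concern concrete is to measure fidelity, the rate at which the simple approximation agrees with the black-box model, separately for each subgroup. The sketch below is self-contained and uses made-up predictions and a hypothetical group label purely to illustrate the comparison; it is not the study's evaluation code.

```python
import numpy as np

def fidelity_by_group(black_box_preds, surrogate_preds, groups):
    """Fraction of instances where the approximation matches the model, per group."""
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        results[g] = float(np.mean(black_box_preds[mask] == surrogate_preds[mask]))
    return results

# Hypothetical predictions for 8 applicants and a binary group attribute.
model_preds     = np.array([1, 0, 1, 1, 0, 1, 0, 0])
surrogate_preds = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups          = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(fidelity_by_group(model_preds, surrogate_preds, groups))
# {'A': 1.0, 'B': 0.5} -> the approximation is far less faithful for group B.
```

A gap like the one in this toy output is exactly the pattern that could lead users to place well-founded trust in the model for one group and misplaced trust for another.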