A new technique compares the reasoning of a machine-learning model to that of a human, so the user can see patterns in the model’s behavior.
In machine learning, understanding why a model makes certain decisions is often just as important as whether those decisions are correct. For instance, a machine-learning model might correctly predict that a skin lesion is cancerous, but it could have done so using an unrelated blip on a clinical photo.
While tools exist to help experts make sense of a model’s reasoning, often these methods only provide insights on one decision at a time, and each must be manually evaluated. Models are commonly trained using millions of data inputs, making it almost impossible for a human to evaluate enough decisions to identify patterns.
Now, researchers at MIT and IBM Research have created a method that enables a user to aggregate, sort, and rank these individual explanations to rapidly analyze a machine-learning model’s behavior. Their technique, called Shared Interest, incorporates quantifiable metrics that compare how well a model’s reasoning matches that of a human.
Shared Interest could help a user easily uncover concerning trends in a model's decision-making, such as a model that frequently relies on distracting, irrelevant features like background objects in photos.
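As a rough illustration of what a "quantifiable metric" comparing model and human reasoning might look like (a minimal sketch, not the authors' implementation), one simple approach is to measure the overlap between the pixels a saliency method highlights and the pixels a human annotator marks as relevant, for example with intersection-over-union and coverage scores:

```python
import numpy as np

def saliency_human_overlap(saliency, human_mask, threshold=0.5):
    """Compare a model's saliency map to a human-annotated region.

    saliency:   2D float array in [0, 1] from any explanation method.
    human_mask: 2D boolean array marking pixels a human considers relevant.
    Returns (iou, coverage), both in [0, 1].
    """
    model_mask = saliency >= threshold                      # pixels the model relied on
    intersection = np.logical_and(model_mask, human_mask).sum()
    union = np.logical_or(model_mask, human_mask).sum()
    iou = intersection / union if union else 0.0
    # Fraction of the human-marked region that the model also used.
    coverage = intersection / human_mask.sum() if human_mask.sum() else 0.0
    return iou, coverage

# Example: a model that attends to an unrelated blip in the image corner
# instead of the lesion itself would score near zero on both metrics,
# flagging its reasoning as suspect.
```

Scores like these can then be aggregated across thousands of predictions and sorted, which is what lets a user spot patterns instead of inspecting one explanation at a time.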