Until recently, it wasn’t possible to say that AI had a hand in forcing a government to resign. But that’s precisely what happened in the Netherlands in January 2021, when the incumbent cabinet resigned over the so-called kinderopvangtoeslagaffaire: the childcare benefits affair.
When a family in the Netherlands sought to claim their government childcare allowance, they needed to file a claim with the Dutch tax authority. Those claims passed through the gauntlet of a self-learning algorithm, first deployed in 2013. In the tax authority's workflow, the algorithm would vet claims for signs of fraud, and humans would then scrutinize those it flagged as high risk.
In reality, the algorithm developed a pattern of falsely labeling claims as fraudulent, and harried civil servants rubber-stamped the fraud labels. So, for years, the tax authority baselessly ordered thousands of families to pay back their claims, pushing many into onerous debt and destroying lives in the process.
“When there is disparate impact, there needs to be societal discussion around this, whether this is fair. We need to define what ‘fair’ is,” says Yong Suk Lee, a professor of technology, economy, and global affairs at the University of Notre Dame, in the United States. “But that process did not exist.”