“We need to know how the many subjective decisions that go into building a model lead to the observed results, and why those decisions were thought justified at the time, just to have a chance at disentangling everything when something goes wrong,” the paper reads. “Algorithmic impact assessments cannot solve all algorithmic harms, but they can put the field and regulators in better positions to avoid the harms in the first place and to act on them once we know more.”
A revamped version of the Algorithmic Accountability Act, first introduced in 2019, is now being discussed in Congress. According to a draft version of the legislation reviewed by WIRED, the bill would require businesses that use automated decision-making systems in areas such as health care, housing, employment, or education to carry out impact assessments and regularly report results to the FTC. A spokesperson for Senator Ron Wyden (D-Ore.), a cosponsor of the bill, says it calls on the FTC to create a public repository of automated decision-making systems and aims to establish an assessment process to enable future regulation by Congress or agencies like the FTC. The draft asks the FTC to decide what should be included in impact assessments and summary reports.