The movement to hold AI accountable gains more steam

“We need to know how the many subjective decisions that go into building a model lead to the observed results, and why those decisions were thought justified at the time, just to have a chance at disentangling everything when something goes wrong,” the paper reads. “Algorithmic impact assessments cannot solve all algorithmic harms, but they can put the field and regulators in better positions to avoid the harms in the first place and to act on them once we know more.”

A revamped version of the Algorithmic Accountability Act, first introduced in 2019, is now being discussed in Congress. According to a draft version of the legislation reviewed by WIRED, the bill would require businesses that use automated decision-making systems in areas such as health care, housing, employment, or education to carry out impact assessments and regularly report results to the FTC. A spokesperson for Senator Ron Wyden (D-Ore.), a cosponsor of the bill, says it calls on the FTC to create a public repository of automated decision-making systems and aims to establish an assessment process to enable future regulation by Congress or agencies like the FTC. The draft asks the FTC to decide what should be included in impact assessments and summary reports.