The journalist and academic says that the bias encoded in artificial intelligence systems can’t be fixed with better data alone – the change has to be societal
The message that bias can be embedded in our technological systems isn’t really new. Why do we need this book?
This book is about helping people understand the very real social harms that can be embedded in technology. We have had an explosion of wonderful journalism and scholarship about algorithmic bias and the harms people have experienced. I try to lift up that reporting and thinking. I also want people to know that we now have methods for measuring bias in algorithmic systems. They are not entirely unknowable black boxes: algorithmic auditing exists and can be done.