AI has a dangerous bias problem

AI now guides numerous life-changing decisions, from assessing loan applications to determining prison sentences.

Proponents argue that the approach can eliminate human prejudices, but critics warn that algorithms can amplify our biases without even revealing how they reached a decision.

This has led to AI systems wrongfully flagging Black people for arrest, or child services unfairly targeting poor families. The victims are frequently from groups that are already marginalized.

Alejandro Saucedo, Chief Scientist at The Institute for Ethical AI and Engineering Director at ML startup Seldon, warns organizations to think carefully before deploying algorithms. He shared his tips for mitigating the risks with TNW.

Machine learning systems need to provide transparency. This can be a challenge when using powerful AI models, whose inputs, operations, and outcomes aren’t obvious to humans.

Explainability has been touted as a solution for years, but effective approaches remain elusive.
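One common explainability technique is permutation importance: shuffle each input feature in turn and measure how much the model's accuracy drops, revealing which inputs the model actually relies on. The sketch below is illustrative, not from the article; the synthetic dataset and random-forest model are assumptions chosen only to make the example self-contained.

```python
# Hedged sketch: probing a black-box classifier with permutation
# importance via scikit-learn. The dataset and model are illustrative
# stand-ins, not anything described in the article.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a real decision dataset (e.g. loan applications).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and measure the drop in score: large drops mark
# the inputs the model depends on most. If a proxy for a protected
# attribute ranks highly, that is a red flag worth investigating.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {score:.3f}")
```

Techniques like this only describe feature influence under one perturbation scheme; they do not by themselves prove a model is fair or unbiased.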
