How to address inequity in healthcare AI? Hire a diverse data team

In a preview of her HIMSS22 session, Dr. Tania M. Martin-Mercado stresses the importance of acknowledging widespread bias without defensiveness.

Even as artificial intelligence has become more deeply integrated into healthcare, experts have cautioned about its possible downsides, including the risk that bias and discrimination become encoded into algorithms.

In fact, said Dr. Tania M. Martin-Mercado, digital advisor in healthcare and life sciences at Microsoft, the implication that automation will miraculously improve care delivery is one that hasn’t been fully explored.

"This is an over-reaching generalization," said Martin-Mercado, who will be presenting on the subject at HIMSS22 in March.

Further, she continued, the assumption "does not take into consideration the multitude of factors that contribute to the very bias and inequity that is sought after by implementing AI in healthcare."

Algorithms have the potential to increase inequities, she explained, when they don’t take into account considerations such as socioeconomic status, gender, disabilities, sexual orientation and other factors that contribute to disparities in outcomes.
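
One way teams can surface such disparities before deployment is to audit a model's outputs across demographic groups. The sketch below is illustrative only and is not drawn from Martin-Mercado's work; the column names, toy data and the 0.10 tolerance are assumptions for the example.

```python
# Illustrative sketch: compare a model's positive-prediction rate across
# demographic groups to flag potential disparities before deployment.
# Column names ("group", "prediction") and the 0.10 threshold are assumed.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, pred_col: str) -> pd.Series:
    """Share of positive predictions within each demographic group."""
    return df.groupby(group_col)[pred_col].mean()

def demographic_parity_gap(rates: pd.Series) -> float:
    """Largest difference in selection rate between any two groups."""
    return float(rates.max() - rates.min())

# Toy data standing in for model output joined with patient attributes.
results = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1,    1,   0,   0,   0,   1,   0],
})

rates = selection_rates(results, "group", "prediction")
gap = demographic_parity_gap(rates)
print(rates)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:  # assumed tolerance; real thresholds are context-specific
    print("Warning: selection rates differ substantially across groups.")
```

A gap like this does not prove an algorithm is unfair, but it is the kind of signal a diverse data team would investigate against the socioeconomic and clinical context rather than dismiss.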
