HOW IS ARTIFICIAL INTELLIGENCE PREDICTING PATIENTS’ RACE FROM MEDICAL IMAGES?

Even an artificial intelligence used to compose a play relied on damaging stereotypes for casting

The miseducation of algorithms is a critical problem: when artificial intelligence reproduces the unconscious attitudes, bigotry, and preconceptions of the humans who built it, serious harm can follow. Computer tools, for example, have incorrectly flagged Black defendants as twice as likely to re-offend as white defendants. When an artificial intelligence used healthcare spending as a proxy for healthcare need, it incorrectly labeled Black patients as healthier than equally ill white patients, because less money had been spent on their care. Even an artificial intelligence used to compose a play relied on damaging stereotypes for casting. Removing sensitive information from the data seems like a possible remedy. But what happens when that is not enough?

There are many examples of bias in natural language processing, but MIT researchers have investigated another vital, largely unexplored modality: medical imaging. Using both private and public datasets, the team found that AI models can accurately predict patients’ self-reported race from medical images alone. Using imaging data from chest X-rays, limb X-rays, chest CT scans, and mammograms, the scientists trained a deep learning model to classify race as Black, White, or Asian, even though the images themselves contained no explicit mention of the patient’s race. This is a feat that even the most experienced physicians cannot achieve, and it is not clear how the model accomplished it.
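To make the setup concrete, the sketch below shows what training an image classifier on self-reported race labels might look like in outline. It is not the researchers’ actual code: the dataset layout, the label encoding, and the ResNet-18 backbone are illustrative assumptions, and the point is only that a standard image-classification pipeline, given X-rays paired with race labels, is all that is needed to reproduce the kind of experiment described.

```python
# Illustrative sketch only: a generic image classifier trained to predict
# self-reported race from chest X-rays. File layout, label encoding, and the
# ResNet-18 backbone are assumptions, not the study's actual configuration.
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
from torchvision import models, transforms
from PIL import Image

LABELS = {"Asian": 0, "Black": 1, "White": 2}  # self-reported race classes

class XrayDataset(Dataset):
    """Pairs each X-ray image file with its self-reported race label."""
    def __init__(self, samples):  # samples: list of (image_path, race_string)
        self.samples = samples
        self.tf = transforms.Compose([
            transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, race = self.samples[idx]
        return self.tf(Image.open(path)), LABELS[race]

def train(samples, epochs=5, lr=1e-4):
    # Standard supervised training loop; nothing race-specific in the model itself.
    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, len(LABELS))
    loader = DataLoader(XrayDataset(samples), batch_size=32, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for imgs, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(imgs), labels)
            loss.backward()
            opt.step()
    return model
```

The unsettling finding is not the pipeline, which is entirely ordinary, but that such a model learns the task at all from images that carry no obvious racial markers.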
