The miseducation of algorithms is a critical problem: when artificial intelligence mirrors the unconscious attitudes, prejudices, and biases of the humans who built it, serious harm can result. Computer programs, for example, have wrongly flagged Black defendants as twice as likely to re-offend as white defendants. When an AI system used healthcare spending as a proxy for healthcare need, it incorrectly labeled Black patients as healthier than equally sick white patients, because less money had been spent on their care. Even an AI used to write a play relied on harmful stereotypes for its casting. Removing sensitive attributes from the data seems like a plausible remedy. But what happens when that is not enough?
There are many examples of bias in natural language processing, but MIT researchers have investigated another important, largely underexplored modality: medical imaging. Using both private and public datasets, the team found that AI models can accurately predict patients' self-reported race from medical images alone. The researchers trained a deep-learning model to classify race as Black, White, or Asian from chest X-rays, limb X-rays, chest CT scans, and mammograms, even though the images themselves contain no explicit indication of the patient's race. This is a feat that even the most experienced physicians cannot perform, and it remains unclear how the model accomplishes it.
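To make the setup concrete, the sketch below shows a generic supervised pipeline of the kind described above: fine-tuning an ImageNet-pretrained convolutional network to predict self-reported race labels from X-ray images. It is only an illustrative approximation; the directory layout, class names, backbone choice (ResNet-50), and hyperparameters are assumptions made for this example, not the researchers' actual code, data, or model.

```python
# Illustrative sketch only: a standard image-classification fine-tuning loop,
# assuming a hypothetical folder of X-ray images grouped by self-reported race.
# Paths, labels, and hyperparameters are assumptions, not the study's pipeline.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

CLASSES = ["Asian", "Black", "White"]  # assumed folder names under data/train/

# Standard preprocessing for an ImageNet-pretrained backbone.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: data/train/<label>/*.png
train_ds = datasets.ImageFolder("data/train", transform=preprocess)
train_loader = DataLoader(train_ds, batch_size=32, shuffle=True, num_workers=4)

# ImageNet-pretrained ResNet-50 with a new 3-class output head.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):  # short run, purely for illustration
    running_loss = 0.0
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f"epoch {epoch + 1}: mean loss {running_loss / len(train_loader):.4f}")
```

The striking point of the study is not the training recipe, which is conventional, but that such a conventional pipeline extracts a signal that human experts cannot see and that persists even after obvious demographic information is stripped from the data.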