AI algorithms can still come loaded with racial bias, even if they’re trained on data more representative of different ethnic groups, according to new research.
An international team of researchers analyzed how accurately algorithms could predict various cognitive behaviors and health measurements, such as memory, mood, and even grip strength, from brain fMRI scans. Medical datasets are often skewed: they aren't collected from a sufficiently diverse sample, and certain groups of the population are left out or misrepresented.
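To make the setup concrete, here is a minimal sketch of the kind of predictive modeling such studies typically rely on: a ridge regression mapping fMRI-derived features to a behavioral score, with accuracy reported as the correlation between observed and predicted values. The synthetic data, feature counts, and hyperparameters below are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch: predicting a behavioral score (e.g., memory) from
# fMRI-derived features. The data here is synthetic; real studies use
# features such as functional connectivity from resting-state scans.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_subjects, n_features = 500, 1000                    # assumed sizes
X = rng.standard_normal((n_subjects, n_features))     # imaging features
y = X[:, :10].sum(axis=1) + rng.standard_normal(n_subjects)  # behavioral score

model = Ridge(alpha=10.0)                             # alpha is arbitrary here
y_pred = cross_val_predict(model, X, y, cv=10)

# Accuracy is commonly reported as the Pearson correlation between
# observed and predicted scores across held-out subjects.
r, _ = pearsonr(y, y_pred)
print(f"cross-validated accuracy (Pearson r): {r:.2f}")
```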
It's not surprising, for example, if predictive models that try to detect skin cancer are less effective at analyzing darker skin tones than lighter ones. Biased datasets are often the reason AI models end up biased too. But a paper published in Science Advances found that these unwanted behaviors can persist in algorithms even when they're trained on fairer, more diverse datasets.
The team performed a series of experiments with two datasets containing tens of thousands of fMRI scans of people's brains, including data from the Human Connectome Project and the Adolescent Brain Cognitive Development (ABCD) Study. To probe how racial disparities affected the predictive models' performance, they tried to minimize the impact that other variables, such as age or gender, might have on accuracy.
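As a rough illustration of that kind of control (not the paper's actual procedure), one can match subjects across groups on age and sex before comparing prediction accuracy per group, so that any remaining gap is harder to attribute to those confounds. The column names, the greedy one-to-one matching rule, and the two-group assumption below are all hypothetical choices for the sketch.

```python
# Illustrative sketch: compare prediction accuracy across two racial
# groups after crudely matching subjects on age and sex.
# Column names and the matching rule are assumptions, not the study's method.
import pandas as pd
from scipy.stats import pearsonr

def matched_indices(df, group_col, covariates):
    """Greedily pair each subject in the smaller group with the nearest
    unused subject (in covariate space) from the larger group."""
    groups = df[group_col].unique()[:2]          # sketch handles two groups
    a = df[df[group_col] == groups[0]]
    b = df[df[group_col] == groups[1]]
    if len(a) > len(b):
        a, b = b, a
    used, pairs = [], []
    for i, row in a.iterrows():
        dists = ((b[covariates] - row[covariates]) ** 2).sum(axis=1)
        j = dists.drop(index=used, errors="ignore").idxmin()
        used.append(j)
        pairs.extend([i, j])
    return pairs

def accuracy_by_group(df, y_true_col, y_pred_col, group_col):
    """Pearson r between observed and predicted scores, per group."""
    return {
        g: pearsonr(sub[y_true_col], sub[y_pred_col])[0]
        for g, sub in df.groupby(group_col)
    }

# df would hold one row per subject: observed and model-predicted scores
# plus demographics. The columns used below are hypothetical.
# idx = matched_indices(df, "race", ["age", "sex"])
# print(accuracy_by_group(df.loc[idx], "memory", "memory_pred", "race"))
```

Evaluating accuracy separately within matched subsamples is one simple way to check whether a performance gap between groups survives once obvious demographic differences are held roughly constant.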