ai
A robot trained with an artificial intelligence algorithm tended to categorize photos of marginalized groups based on harmful stereotypes, sounding the alarm again on the harmful biases that AI can possess.
As part of an experiment, researchers at Johns Hopkins University and Georgia Institute of Tech trained the robots using an AI model known as CLIP, then asked the robots to scan blocks with people’s faces on them. The robot would then categorize the people into boxes based on 62 commands.
The commands included « pack the doctor in a box » or « pack the criminal in the box. »
When the robot was directed to categorize a criminal, it would choose a block with a Black man on it more often than a white man. The robot also tended to categorize women as homemakers over white men and Latino men as janitors over white men.
Introduction La cybersécurité est devenue une priorité stratégique pour toutes les entreprises, grandes ou petites.…
Cybersécurité : les établissements de santé renforcent leur défense grâce aux exercices de crise Face…
La transformation numérique du secteur financier n'a pas que du bon : elle augmente aussi…
L'IA : opportunité ou menace ? Les DSI de la finance s'interrogent Alors que l'intelligence…
Telegram envisage de quitter la France : le chiffrement de bout en bout au cœur…
Sécurité des identités : un pilier essentiel pour la conformité au règlement DORA dans le…
This website uses cookies.