Last week, Twitter shared research showing that the platform’s algorithms amplify tweets from right-of-center politicians and news outlets at the expense of left-leaning sources. Rumman Chowdhury, the head of Twitter’s machine learning, ethics, transparency, and accountability team, said in an interview with Protocol that while some of the behavior could be user-driven, the reason for the bias isn’t entirely clear.
“We can see that it is happening. We are not entirely sure why it is happening,” Chowdhury said. “When algorithms get put out into the world, what happens when people interact with it — we can’t model for that. We can’t model for how individuals or groups of people will use Twitter, what will happen in the world in a way that will impact how people use Twitter.”