A new Stanford study suggests AI still has a bias problem

Stanford’s annual report gives a snapshot of both the research and application of AI in its various forms.

A new report from the Stanford Institute for Human-Centered Artificial Intelligence (HAI) describes the rapid maturation of the AI industry in 2021, but also contains some sobering news about AI bias.

Natural language processing (NLP) models continued growing larger over the past year, according to the report. While that has yielded impressive gains in language skill, it has failed to rid these models of persistent problems with toxicity and bias.

The "AI Index 2022 Annual Report" measures and evaluates the yearly progress of AI by tracking it from numerous angles, including R&D, ethics, and policy and governance. Here are the biggest takeaways.

Some of the most significant developments in AI over the past few years have occurred in the performance of natural language models, that is, neural networks trained to read, generate, and reason about language. Starting with the breakthrough BERT model developed by Google researchers in 2018, a steady stream of progressively larger language models, trained on progressively larger data sets, has delivered impressive (sometimes shocking) performance gains. NLP models now range into the hundreds of billions of parameters (the learned weights that determine how a network transforms its input), and the best ones exceed human baselines on some benchmarks of language comprehension and text generation.
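To give a sense of where those hundreds of billions of parameters come from, here is a back-of-the-envelope sketch (not from the report) of how parameter counts scale in a GPT-style transformer. The 12·d² per-block estimate and the example configuration (96 layers, model width 12,288, ~50k-token vocabulary, publicly reported for GPT-3) are illustrative assumptions, not figures from the Stanford study.

```python
def approx_transformer_params(n_layers: int, d_model: int, vocab_size: int) -> int:
    """Rough parameter count for a GPT-style decoder-only transformer.

    Each block contributes roughly 4*d^2 weights for attention
    (query/key/value/output projections) plus 8*d^2 for the MLP
    (two layers of width 4*d), i.e. ~12*d^2 per block.
    """
    per_block = 12 * d_model ** 2
    embeddings = vocab_size * d_model  # token-embedding matrix
    return n_layers * per_block + embeddings


# Illustrative GPT-3-scale configuration (assumed, not from the report)
total = approx_transformer_params(n_layers=96, d_model=12288, vocab_size=50257)
print(f"~{total / 1e9:.0f} billion parameters")
```

Because the per-block cost grows with the square of the model width, doubling the width roughly quadruples the parameter count, which is why recent models have ballooned so quickly.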
