ChatGPT, a chatbot developed by OpenAI, an American firm, can give passable answers to questions on everything from nuclear engineering to Stoic philosophy. Or at least, it can in English. GPT-4, the model behind the latest version, scored 85% on a common question-and-answer benchmark. In other languages it is less impressive. When taking the test in Telugu, an Indian language spoken by nearly 100m people, for instance, it scored just 62%.
OpenAI has not revealed much about how GPT-4 was built. But a look at its predecessor, GPT-3, is suggestive. Large language models (LLMs) are trained on text scraped from the internet, where English is the lingua franca. Around 93% of GPT-3's training data was in English. In Common Crawl, just one of the datasets on which the model was trained, English makes up 47% of the corpus, with other (mostly related) European languages accounting for a further 38%. Chinese and Japanese combined, by contrast, make up just 9%. Telugu was not even a rounding error.