We Don’t Actually Know If AI Is Taking Over Everything


Since the release of ChatGPT last year, I’ve heard some version of the same question over and over again: What is going on? The rush of chatbots and endless “AI-powered” apps has made starkly clear that this technology is poised to upend everything—or, at least, something. Yet even AI experts are struggling with a dizzying feeling: for all the talk of its transformative potential, much about this technology is veiled in secrecy.

It isn’t just a feeling. More and more of this technology, once developed through open research, has become almost completely hidden within corporations that are opaque about what their AI models are capable of and how they are made. Transparency isn’t legally required, and the secrecy is causing problems: Earlier this year, The Atlantic revealed that Meta and others had used nearly 200,000 books to train their AI models without the compensation or consent of the authors.

Now we have a way to measure just how bad AI’s secrecy problem actually is. Yesterday, Stanford University’s Center for Research on Foundation Models launched a new index that tracks the transparency of 10 major AI companies, including OpenAI, Google, and Anthropic. The researchers graded each company’s flagship model on whether its developers publicly disclosed 100 different pieces of information—such as what data it was trained on, the wages paid to the data and content-moderation workers involved in its development, and when the model should not be used. One point was awarded for each disclosure. Among the 10 companies, the highest-scoring barely cleared 50 of the 100 possible points; the average was 37. Every company, in other words, gets a resounding F.
