ChatGPT burst onto the technology scene, gaining 100 million users by the end of January 2023, just two months after its launch, and bringing with it a looming sense of change.
The technology itself is fascinating, but part of what makes ChatGPT uniquely interesting is the fact that essentially overnight, most of the world gained access to a powerful generative artificial intelligence that they could use for their own purposes. In this episode of The Conversation Weekly, we speak with researchers who study computer science, technology and economics to explore how the rapid adoption of technologies has, for the most part, failed to change social and economic systems in the past – but why AI might be different, despite its weaknesses.
Spending just a few minutes playing with new generative AI algorithms can show you just how powerful they are. You can open up DALL-E, type in a phrase like “dinosaur riding motorcycle across a bridge,” and seconds later the algorithm will produce multiple images more or less depicting what you asked for. ChatGPT does much the same, just with text as its output.
These models are trained on huge amounts of data taken from the internet, and as Daniel Acuña, an associate professor of computer science at the University of Colorado, Boulder, in the U.S., explains, that can be a problem. “If we are feeding these models data from the past and data from today, they will learn some biases,” Acuña says. “They will relate words – let’s say about occupations – and find relationships between words and how they are used with certain genders or certain races.”
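The kind of association Acuña describes can be illustrated with a toy sketch. The word vectors below are entirely made up for illustration, not taken from any real model, but they show the mechanism: when training text uses an occupation word more often alongside one gendered word than another, the learned representations end up geometrically closer, and the model inherits that skew.

```python
import math

# Hand-made, purely illustrative 2-D "word vectors" (a real model would
# learn high-dimensional vectors like these from internet-scale text,
# absorbing whatever associations that text contains).
vectors = {
    "he":       [0.9, 0.1],
    "she":      [0.1, 0.9],
    "engineer": [0.8, 0.2],  # skewed toward "he" in this made-up data
    "nurse":    [0.2, 0.8],  # skewed toward "she" in this made-up data
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# The occupation word ends up measurably "closer" to one gendered word:
print(cosine(vectors["engineer"], vectors["he"]))   # higher similarity
print(cosine(vectors["engineer"], vectors["she"]))  # lower similarity
```

In a real model the same comparison, run over vectors learned from web text, is one standard way researchers measure this kind of bias.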
The problem of bias in AI is not new, but as access widens, far more people are now using these models, and as Acuña says, “I hope that whoever is using those models is aware of these issues.”