Somehow, Mira Murati can forthrightly discuss the dangers of AI while making you feel like it’s all going to be OK.
Murati is chief technology officer at OpenAI, leading the teams behind DALL-E, which uses AI to create artwork based on prompts, and ChatGPT, the wildly popular AI chatbot that can answer complex questions with eerily humanlike skill.
ChatGPT captured the public imagination upon its release in late November. While some schools are banning it, Microsoft announced a $10 billion investment in the company and Google issued a “code red,” fretting that the technology could disrupt its search business. “As with other revolutions that we’ve gone through, there will be new jobs and some jobs will be lost…” Murati told Trevor Noah last fall of the impact of AI, “but I’m optimistic.”
For most of January, ChatGPT surpassed Bitcoin among popular search terms, according to Google Trends. All the attention has meant the privately held San Francisco–based startup—with 375 employees and little in the way of revenue—now has a valuation of roughly $30 billion. Murati spoke to TIME about ChatGPT’s biggest weakness, the software’s untapped potential, and why it’s time to move toward regulating AI.
First, I want to congratulate you and your team on the recent news that ChatGPT scored a passing grade on a U.S. medical-licensing exam, a Wharton Business School MBA exam, and four major university law-school exams. Does it feel like you have a brilliant child?
We weren’t anticipating this level of excitement from putting our child in the world. We, in fact, even had some trepidation about putting it out there. I’m curious to see the areas where it’ll start generating utility for people and not just novelty and pure curiosity.
I asked ChatGPT for a good question to ask you. Here’s what it said: “What are some of the limitations or challenges you have encountered while working with ChatGPT and how have you overcome them?”
That is a good question. ChatGPT is essentially a large conversational model, a big neural net trained to predict the next word, and its challenges are similar to those we see with the base large language models: it may make up facts.