The Creator of ChatGPT Thinks AI Should Be Regulated

Somehow, Mira Murati can forthrightly discuss the dangers of AI while making you feel like it’s all going to be OK.

Murati is chief technology officer at OpenAI, leading the teams behind DALL-E, which uses AI to create artwork based on prompts, and ChatGPT, the wildly popular AI chatbot that can answer complex questions with eerily humanlike skill.

ChatGPT captured the public imagination upon its release in late November. While some schools are banning it, Microsoft announced a $10 billion investment in the company and Google issued a “code red,” fretting that the technology could disrupt its search business. “As with other revolutions that we’ve gone through, there will be new jobs and some jobs will be lost…” Murati told Trevor Noah last fall of the impact of AI, “but I’m optimistic.”

For most of January, ChatGPT surpassed Bitcoin among popular search terms, according to Google Trends. All the attention has meant the privately held San Francisco–based startup—with 375 employees and little in the way of revenue—now has a valuation of roughly $30 billion. Murati spoke to TIME about ChatGPT’s biggest weakness, the software’s untapped potential, and why it’s time to move toward regulating AI.

First, I want to congratulate you and your team on the recent news that ChatGPT scored a passing grade on a U.S. medical-licensing exam, a Wharton Business School MBA exam, and four major university law-school exams. Does it feel like you have a brilliant child?

We weren’t anticipating this level of excitement from putting our child in the world. We, in fact, even had some trepidation about putting it out there. I’m curious to see the areas where it’ll start generating utility for people and not just novelty and pure curiosity.

I asked ChatGPT for a good question to ask you. Here’s what it said: “What are some of the limitations or challenges you have encountered while working with ChatGPT and how have you overcome them?”

That is a good question. ChatGPT is essentially a large conversational model—a big neural net that has been trained to predict the next word—and its challenges are similar to those we see with the base large language models: it may make up facts.
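The "predict the next word" idea Murati describes can be illustrated with a toy sketch. This is not OpenAI's implementation—real models use huge neural networks trained on vast corpora—but a simple bigram counter shows the same principle: given the words so far, pick the most probable continuation.

```python
from collections import Counter, defaultdict

# Illustrative only: a language model assigns probabilities to the next
# word given the preceding context. Here we stand in for the neural net
# with simple bigram counts over a tiny made-up corpus.
corpus = "the model predicts the next word and the model may make up facts".split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word after `word`, or None if unseen."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "model" ("model" follows "the" most often here)
```

A model like this will happily emit a fluent-looking continuation whether or not it is factually true, which is exactly the "making up facts" weakness Murati names: the objective is plausible next words, not verified statements.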

Published by Veille-cyber
