
The Creator of ChatGPT Thinks AI Should Be Regulated

Somehow, Mira Murati can forthrightly discuss the dangers of AI while making you feel like it’s all going to be OK.

Murati is chief technology officer at OpenAI, leading the teams behind DALL-E, which uses AI to create artwork based on prompts, and ChatGPT, the wildly popular AI chatbot that can answer complex questions with eerily humanlike skill.

ChatGPT captured the public imagination upon its release in late November. While some schools are banning it, Microsoft announced a $10 billion investment in the company and Google issued a “code red,” fretting that the technology could disrupt its search business. “As with other revolutions that we’ve gone through, there will be new jobs and some jobs will be lost…” Murati told Trevor Noah last fall of the impact of AI, “but I’m optimistic.”

For most of January, ChatGPT surpassed Bitcoin among popular search terms, according to Google Trends. All the attention has meant the privately held San Francisco–based startup—with 375 employees and little in the way of revenue—now has a valuation of roughly $30 billion. Murati spoke to TIME about ChatGPT’s biggest weakness, the software’s untapped potential, and why it’s time to move toward regulating AI.

First, I want to congratulate you and your team on the recent news that ChatGPT scored a passing grade on a U.S. medical-licensing exam, a Wharton Business School MBA exam, and four major university law-school exams. Does it feel like you have a brilliant child?

We weren’t anticipating this level of excitement from putting our child in the world. We, in fact, even had some trepidation about putting it out there. I’m curious to see the areas where it’ll start generating utility for people and not just novelty and pure curiosity.

I asked ChatGPT for a good question to ask you. Here’s what it said: “What are some of the limitations or challenges you have encountered while working with ChatGPT and how have you overcome them?”

That is a good question. ChatGPT is essentially a large conversational model—a big neural net that has been trained to predict the next word—and its challenges are similar to those we see with the base large language models: it may make up facts.
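Murati's description—a model trained to predict the next word—can be illustrated with a toy sketch. The following is not OpenAI's code, just a minimal bigram predictor: where ChatGPT uses a huge neural network trained on vast text, this simply counts which word most often follows another in a tiny sample corpus (the corpus and function names are invented for illustration).

```python
from collections import Counter, defaultdict

# Toy corpus standing in for training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

The same count-and-pick-the-likeliest idea also shows why such models "may make up facts": the prediction reflects statistical patterns in the training text, not a check against reality.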

Source: Veille-cyber
