New AI chatbot is scary good

The newest AI wonder, ChatGPT, the latest in a line of rapidly evolving AI text generators, is causing jaws to drop and brows to furrow.

What’s happening: Users are telling ChatGPT to rewrite literary classics in new styles or to produce performance reviews of their colleagues, and the results can be scarily good.

Why it matters: ChatGPT displays AI’s power and fun. It could also make life difficult for everyone — as teachers and bosses try to figure out who did the work and all of society struggles even harder to discern truth from fiction.

Driving the news: Last week’s public release of ChatGPT came from OpenAI, which had previously set benchmarks in this field with GPT-3 and its predecessors. (There’s also an unofficial Twitter bot for those who don’t want to bother with signing up for the service.)

Yes, but: The high quality of ChatGPT’s responses adds to the fun, but also highlights the risks associated with AI.

  • As we wrote just last week, a big pitfall for today’s most advanced AI programs is their tendency to be “confidently wrong,” presenting falsehoods authoritatively.
  • That’s certainly the case with ChatGPT, which can weave a convincing tale about a completely fictitious Ohio-Indiana war.
  • Nightmare scenarios center on fears that text from AI engines could be used to inundate the public with authoritative-sounding misinformation supporting conspiracy theories and propaganda.
  • OpenAI chief Sam Altman says some of what people interpret as “censorship” — when ChatGPT says it won’t tackle a user request — is actually an effort to keep the bot from spewing out false info as fact.