Eric Schmidt was executive chairman while I was in the trenches at Google in 2012, but I know better than to claim—as he does with Henry Kissinger and Daniel Huttenlocher—that GPT-3 is “producing original text that meets Alan Turing’s standard.” The GPT-3 program hasn’t passed the Turing test, and it seems nowhere near doing so (“The Challenge of Being Human in the Age of AI,” op-ed, Nov. 2).
Compared with earlier text-generation systems, the output generated by GPT-3 looks impressive at a local level; individual phrases, sentences and paragraphs usually demonstrate good grammar and look like normal human-generated text. But at a global level—considering the meaning of multiple sentences, paragraphs or a back-and-forth dialogue—it becomes apparent that GPT-3 doesn’t understand what it’s talking about. It doesn’t have common-sense reasoning or the ability to keep track of objects over time in a discussion. One example, published in August 2020 in MIT Technology Review: GPT-3 was asked, “Yesterday I dropped my clothes off at the dry cleaner’s and I have yet to pick them up. Where are my clothes?” Its response: “I have a lot of clothes.”