UNLIKE THE CAREFULLY scripted dialogue found in most books and movies, the language of everyday interaction tends to be messy and incomplete, full of false starts, interruptions, and people talking over each other.
From casual conversations between friends to bickering between siblings to formal discussions in a boardroom, authentic conversation is chaotic. It seems miraculous that anyone can learn a language at all, given the haphazard nature of the linguistic experience.
For this reason, many language scientists — including Noam Chomsky, a founder of modern linguistics — believe that language learners require a kind of glue to rein in the unruly nature of everyday language. And that glue is grammar: a system of rules for generating grammatical sentences.
Children must have a grammar template wired into their brains to help them overcome the limitations of their language experience — or so the thinking goes.
This template, for example, might contain a “super-rule” that dictates how new pieces are added to existing phrases. Children then only need to learn whether their native language is one, like English, where the verb goes before the object (as in “I eat sushi”), or one like Japanese, where the verb goes after the object (in Japanese, the same sentence is structured as “I sushi eat”).
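The "super-rule" idea can be pictured as a fixed sentence template plus one switch the child must set: does the verb come before or after its object? The sketch below is purely illustrative — the function names and the toy template are assumptions, not a real grammar.

```python
# Illustrative sketch of a head-direction "super-rule": one shared
# phrase-structure template, with a single per-language setting that
# decides whether the verb precedes or follows its object.

def build_verb_phrase(verb: str, obj: str, verb_first: bool) -> str:
    """Combine a verb and its object according to the language's word order."""
    return f"{verb} {obj}" if verb_first else f"{obj} {verb}"

def build_sentence(subject: str, verb: str, obj: str, verb_first: bool) -> str:
    """The subject comes first either way; only the verb phrase order differs."""
    return f"{subject} {build_verb_phrase(verb, obj, verb_first)}"

# English-style order: verb before object
print(build_sentence("I", "eat", "sushi", verb_first=True))   # I eat sushi

# Japanese-style order: verb after object
print(build_sentence("I", "eat", "sushi", verb_first=False))  # I sushi eat
```

On this view, the template itself is innate and shared; the learner's only job is to flip the `verb_first` switch to match the language being heard.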
But new insights into language learning come from an unlikely source: artificial intelligence. A new breed of large AI language models can write newspaper articles, poetry, and computer code, and answer questions truthfully, after being exposed to vast amounts of language input. Even more astonishingly, they do all of this without the help of grammar.