Have We Been Wrong About Language for 70 Years? A New Study That Politely (and Then Not So Politely) Rearranges Linguistics
For roughly seven decades, linguistics has been operating under a shared assumption so deeply embedded that questioning it felt like questioning gravity, or coffee, or whether meetings could have been emails.
Language, we were told, is built on deep, hierarchical grammatical structures. Elegant trees. Branches. Constituents nested inside constituents like linguistic Russian dolls. Every sentence, no matter how casual or chaotic, supposedly emerges from an invisible internal syntax engine doing extremely sophisticated math in your head while you’re just trying to order tacos.
And now—enter a new study from researchers affiliated with Cornell University, published in Nature Human Behaviour, calmly suggesting:
What if language works… more like LEGO bricks?
Not metaphorical LEGO bricks. Literal “you keep reusing the same chunks because they work” LEGO bricks.
This is the academic equivalent of saying, “We may have spent 70 years reverse-engineering a Swiss watch, only to discover it’s been running on Velcro.”
And the implications are… uncomfortable. For linguists. For theories of human uniqueness. For anyone who’s ever diagrammed a sentence with confidence.
Let’s talk about what this study actually says—and why it messes with some very cherished ideas.
The Sacred Grammar Tree (RIP?)
Since at least the mid-20th century, the dominant view in linguistics has been that human language is fundamentally hierarchical.
Words don’t just line up next to each other like polite commuters. They group into phrases. Those phrases combine into bigger phrases. Eventually, you get a full sentence—a towering grammatical structure assembled according to internal rules so abstract that you never consciously notice them.
Classic example:
She ate the cake.
Under the traditional view:
- “the cake” forms a noun phrase
- “ate the cake” becomes a verb phrase
- that verb phrase combines with “she”
- voilà: a sentence, fully licensed by grammar
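If you like your metaphors executable, here is a rough sketch of the difference (purely illustrative, not from the study): the traditional view stores the sentence as constituents nested inside constituents, while a flat view stores it as a plain sequence of words.

```python
# Toy illustration (not from the study): the hierarchical view represents
# "She ate the cake" as nested constituents; the flat view is just the words.

# Hierarchical: constituents nested inside constituents
hierarchical = (
    "S",
    ("NP", "She"),
    ("VP",
        ("V", "ate"),
        ("NP", ("Det", "the"), ("N", "cake"))),
)

# Flat: just the words, in order
flat = ["She", "ate", "the", "cake"]

def depth(node):
    """Count how deeply the constituents are nested."""
    if isinstance(node, str):
        return 0
    return 1 + max(depth(child) for child in node[1:])

print(depth(hierarchical))  # 4 levels of nesting for a four-word sentence
print(len(flat))            # 4 words, no hidden structure
```

Four layers of invisible scaffolding for four words. That is the machinery the traditional view assumes you build, silently, every time you speak.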
This way of thinking gave linguistics a reassuring sense of depth. Language wasn’t just strings of words—it was architecture. Cathedrals of syntax. Hidden machinery that separated humans from every other species making noises.
Which brings us to the uncomfortable part.
Enter the Party Poopers: Morten Christiansen and Yngwie Nielsen
In this new study, Morten H. Christiansen and Yngwie A. Nielsen suggest something that sounds almost offensively simple:
Language may rely far less on elaborate grammatical hierarchies than we’ve assumed—and far more on frequently used linear patterns.
Not rules.
Not trees.
Patterns.
Sequences like:
- “can I have a”
- “in the middle of the”
- “wondered if you”
These are not neat grammatical constituents. They don’t form tidy units in traditional syntax diagrams. They’re messy. Awkward. Grammatically homeless.
And yet… we use them constantly.
The Dirty Secret of Language: We Reuse Phrases Like Leftovers
One of the study’s key observations is almost embarrassing in its obviousness once you see it.
A huge amount of everyday language consists of pre-assembled chunks that speakers pull off the shelf and snap together.
You don’t generate “can I have a” from first principles every time you speak. You don’t consult a deep grammatical rulebook and compute its structure. You just… grab it.
Same with:
- “I don’t know if”
- “it was one of those”
- “do you want me to”
These chunks are so familiar that they behave like single units in your mind—even though grammar textbooks don’t know what to do with them.
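To make that concrete, here is a toy sketch (my illustration, not the researchers' method) of how recurring word sequences can be pulled out of a handful of casual utterances, whether or not they count as grammatical constituents.

```python
from collections import Counter

# Toy illustration (not the study's method): count how often multi-word
# sequences recur in a small sample of casual speech. Frequent sequences
# like "can i have a" surface as candidate chunks even though they are
# not grammatical constituents.
sample = [
    "can i have a coffee",
    "can i have a minute",
    "i don't know if that works",
    "it was one of those days",
    "do you want me to call",
    "can i have a look",
]

def ngrams(words, n):
    """All contiguous n-word sequences in a list of words."""
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

counts = Counter()
for utterance in sample:
    words = utterance.split()
    for n in (3, 4):
        counts.update(ngrams(words, n))

# The most frequent sequences behave like reusable chunks
for chunk, freq in counts.most_common(3):
    print(freq, chunk)
```

The point isn't the counting; it's that “can i have a” pops out as a unit purely through repetition, with no grammar tree in sight.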
And here’s where the study gets interesting.
Your Brain Knows These Chunks Exist (Even If Grammar Pretends They Don’t)
Christiansen and Nielsen didn’t just speculate. They tested.
Using eye-tracking experiments and analyses of real phone conversations, they found that people process these non-constituent sequences faster once they’ve encountered them before.
That phenomenon—called priming—is a big deal in cognitive science. It means your brain has stored something as a recognizable pattern.
In other words:
- Your brain remembers “can I have a”
- It recognizes it
- It processes it efficiently
- Even though traditional grammar theory insists it’s not a “real unit”
That’s awkward.
Because it suggests that our mental representation of language includes things grammar theory has been ignoring for decades.
The Grammar-Only Model Has a Blind Spot the Size of Everyday Speech
The dominant grammar-first framework has always insisted that the important structures are hierarchical.
Linear sequences? Those were seen as surface noise. Ephemeral. Side effects of deeper rules doing the real work.
But the study flips that logic around.
If non-hierarchical sequences:
- are frequent,
- are stored in memory,
- are processed efficiently,
- and actively shape comprehension,
then ignoring them isn’t just incomplete—it’s misleading.
As Nielsen put it, traditional grammar rules simply can’t capture everything that’s going on in the mind when we use language.
Which raises an awkward question.
Have Linguists Been Studying the Wrong Thing?
For decades, linguistics has been obsessed with what language could be in principle, rather than how language is actually used.
Idealized sentences.
Perfect grammar.
Clean examples that behave nicely on a chalkboard.
Meanwhile, real human language is out there:
- repetitive
- formulaic
- filled with half-finished constructions
- stitched together from familiar bits
- and shockingly unbothered by theoretical elegance
This study suggests that fluency may come less from mastering abstract rules and more from learning which chunks tend to appear next to each other.
Which, frankly, explains a lot:
- Why children learn language faster than any formal system predicts
- Why idioms survive despite being “illogical”
- Why second-language learners often sound grammatical but unnatural
- Why conversation feels effortless even though grammar textbooks are exhausting
The LEGO Theory of Language (and Why It Makes Academics Nervous)
Christiansen uses a metaphor that feels almost dangerous in its simplicity: LEGO bricks.
Instead of building sentences from scratch every time, speakers reuse pre-built pieces:
- phrase fragments
- word sequences
- probabilistic patterns
You snap them together as needed. Some pieces are bigger. Some are smaller. Some don’t fit grammar diagrams but fit real conversation perfectly.
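For the code-inclined, here is a minimal, entirely made-up sketch of the metaphor: an inventory of stored chunks, plus a record of which pieces tend to follow which, snapped together on demand. None of this is the authors' model; it is just the LEGO idea written down.

```python
import random

# Toy illustration of the LEGO metaphor (not the authors' model): an
# utterance is snapped together from stored chunks, and the speaker also
# "knows" which chunks tend to follow which. All chunks and pairings
# here are invented for illustration.
continuations = {
    "can I have a": ["coffee", "look", "minute"],
    "do you want me to": ["call them back", "check on that"],
    "it was one of those": ["days", "things"],
}

def assemble():
    """Pick a stored opener, then a continuation that commonly follows it."""
    opener = random.choice(list(continuations))
    return opener + " " + random.choice(continuations[opener])

print(assemble())  # e.g. "can I have a coffee"
```

No rulebook consulted, no tree derived. Just familiar pieces, snapped together in an order that experience says tends to work.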
This doesn’t mean grammar disappears entirely. It just stops being the sole architect.
Think of grammar less like a rulebook and more like guidelines that emerge from repeated usage.
That shift has consequences.
The Uncomfortable Animal Question
Here’s where things get spicy.
One reason hierarchical grammar has been so cherished is that it supposedly separates humans from other animals.
Animals communicate, sure—but not with recursive, layered syntax. Not with infinite generativity.
Except… if human language relies more on linear patterns and less on deep hierarchy than we thought, that gulf starts to narrow.
Christiansen himself hints at this:
If you don’t need the more complex machinery of hierarchical syntax, the difference between human language and animal communication may be smaller than previously assumed.
Translation:
The throne of human linguistic exceptionalism just wobbled.
Not collapsed.
But definitely squeaked.
This Doesn’t Mean Grammar Is Fake (Sorry, Internet)
Let’s be clear: this study does not say grammar doesn’t exist.
It says grammar might not be the central, all-powerful engine we imagined.
Instead:
- Grammar may emerge from usage
- Patterns may come first
- Structure may be flatter than assumed
- Complexity may arise from repetition, not abstraction
Which, ironically, makes language more impressive—not less.
Because it suggests humans achieve infinite expressiveness using surprisingly efficient tools.
Why This Matters Outside Academia (Yes, It Actually Does)
This isn’t just a turf war inside linguistics departments.
If language works the way this study suggests, it affects:
- How children learn to speak (less rule-learning, more pattern absorption)
- How adults learn second languages (stop obsessing over grammar drills)
- How AI language models are trained (spoiler: this aligns suspiciously well with how they already work)
- How we think about cognition itself
It reframes language as something adaptive, probabilistic, and experience-driven rather than rule-bound and pristine.
Which feels… correct.
The Quiet Irony: We Already Knew This From Talking to Each Other
The funniest part of this entire story is that ordinary speakers have always behaved as if this were true.
We reuse phrases.
We finish each other’s sentences.
We speak in templates.
We rely on rhythm, expectation, and familiarity.
We don’t mentally diagram syntax mid-conversation.
Linguistics just took a while to notice.
So, Have We Been Wrong for 70 Years?
Not wrong in the sense of “throw everything out.”
Wrong in the sense of:
- Over-prioritizing elegance
- Under-valuing messiness
- Confusing theoretical neatness with cognitive reality
The grammar-first model wasn’t useless—it just wasn’t complete.
And this study doesn’t burn the house down. It rearranges the furniture and asks why the couch was nailed to the ceiling in the first place.
Final Thought: Language Isn’t a Cathedral—It’s a Workshop
Language isn’t built once and admired forever.
It’s assembled on the fly.
From familiar pieces.
By imperfect humans.
Under time pressure.
With social goals.
Using whatever works.
That doesn’t make it less remarkable.
It makes it more human.
And if it took 70 years for linguistics to admit that, well—language, at least, will find a way to say “better late than never.”