Large Language Mistake(?) — A.I. “Intelligence” Neuroscience Problems

November 25, 2025 in News by RBN Staff

Source: TheVerge.com
[Subscription required for the full article; below is a captured excerpt of the introduction.]

Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.

By Benjamin Riley

“Developing superintelligence is now in sight,” says Mark Zuckerberg, heralding the “creation and discovery of new things that aren’t imaginable today.” Powerful AI “may come as soon as 2026 [and will be] smarter than a Nobel Prize winner across most relevant fields,” says Dario Amodei, offering the doubling of human lifespans or even “escape velocity” from death itself. “We are now confident we know how to build AGI,” says Sam Altman, referring to the industry’s holy grail of artificial general intelligence — and soon superintelligent AI “could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own.”

Should we believe them? Not if we trust the science of human intelligence and simply look at the AI systems these companies have produced so far.

The common feature cutting across chatbots such as OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and whatever Meta is calling its AI product this week is that they are all primarily “large language models.” Fundamentally, they are based on gathering an extraordinary amount of linguistic data (much of it codified on the internet), finding statistical correlations between words (more accurately, sub-words called “tokens”), and then predicting what output should follow a given prompt as input. For all the alleged complexity of generative AI, at their core these systems really are models of language.
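
To make “predicting what output should follow” concrete, here is a deliberately tiny sketch in Python. It is an illustration only: the toy corpus, the whitespace tokenization, and the bigram counting below are stand-ins invented for this example, nowhere near the scale or the learned neural-network machinery of ChatGPT, Claude, or Gemini. But the basic move is the same: tally statistics over sequences of tokens, then emit whatever continuation those statistics make most likely.

```python
# Toy next-token predictor (illustration only, not how any real LLM is built).
# It counts which token follows which in a tiny corpus, then predicts the
# most frequent continuation: pure sequence statistics, no understanding.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug ."
tokens = corpus.split()  # real systems use learned sub-word "tokens", not whitespace splits

# Bigram statistics: how often does each token follow each preceding token?
follows = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the continuation seen most often after `token` in the corpus."""
    candidates = follows.get(token)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("sat"))  # prints 'on'  (it followed 'sat' twice in the corpus)
print(predict_next("the"))  # prints 'cat' (ties broken by first occurrence)
```

A production LLM swaps these raw counts for billions of learned parameters and attention over long contexts, but the output is still, at bottom, a statistically plausible continuation of the input text.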

The problem is that according to current neuroscience, human thinking is largely independent of human language — and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own.

The AI hype machine relentlessly promotes the idea that we’re on the verge of creating something as intelligent as humans, or even a “superintelligence” that will dwarf our own cognitive capacities. If we gather tons of data about the world and combine it with ever more computing power (read: Nvidia chips) to improve our statistical correlations, then presto, we’ll have AGI. Scaling is all we need.

But this theory is scientifically flawed. LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive processes of thinking and reasoning, no matter how many data centers we build.

We use language to think, but that does not make language the same as thought

Last year, three scientists published a commentary in the journal Nature titled, with admirable clarity, “Language is primarily a tool for communication rather than thought.” Co-authored by Evelina Fedorenko (MIT), Steven T. Piantadosi (UC Berkeley) and Edward A.F. Gibson (MIT), the article is a tour de force ….

ARTICLE CONTINUES AT VERGE.COM – SUBSCRIPTION REQUIRED