It’s tempting, and perhaps even convenient, to imagine artificial intelligence as a synthetic mind. But what if that’s all wrong, and even counterproductive? Let’s take a closer look.
Large language models (LLMs) talk, write, and reason with an ease that feels disarmingly familiar. Their architectures borrow terms like “neurons” and “learning.” So it’s no surprise we keep describing them as artificial brains.
But that metaphor, while convenient, is misleading. LLMs don’t think like we do. They don’t feel, remember, or grow. Their brilliance doesn’t come from mimicking the brain—it comes from being something else entirely.
Not a Mind, But a Scaffold
Here’s the truth, like it or not: LLMs are not neurons in silicon. They’re hyperdimensional language scaffolds, structures built from patterns in language that generate coherent responses by predicting the next most likely word. Bear with me; that’s a mouthful, but it’s key.
They don’t have thoughts and they don’t have selves. What they maintain is a kind of “statistical balance” across shifting input patterns: they constantly adjust to keep a conversation internally consistent, without any real understanding.
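For the curious, here is a deliberately tiny Python sketch of that “next most likely word” move. The hand-counted table of word pairs below is my own stand-in for training, not how any real model is built; actual LLMs learn a neural network over billions of tokens. But the basic gesture of scoring candidates and continuing with a likely one is the same.

```python
# Toy illustration only: a miniature "next most likely word" picker.
# Real LLMs use learned neural networks over tokens, not hand-counted
# word-pair tables, but the core move is the same: score the candidates,
# then continue with a high-probability one.

from collections import Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which (a crude stand-in for training).
bigrams = Counter(zip(corpus, corpus[1:]))

def next_word(prev: str) -> str:
    """Return the most statistically likely continuation of `prev`."""
    candidates = {b: c for (a, b), c in bigrams.items() if a == prev}
    return max(candidates, key=candidates.get) if candidates else "<end>"

print(next_word("the"))   # 'cat' -- the most frequent follower of 'the'
print(next_word("cat"))   # 'sat' or 'ate', whichever the counts favor
```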
This “intelligence” isn’t born of biology. It’s a structure that emerges from probability and pattern, something fundamentally different from cognition as we know it, held together not by meaning but by coherence. Coherence may well become the new “C-word” in AI, quietly replacing the overused and often misunderstood term cognition as the more accurate descriptor of what these systems actually do.
The Mirage of Anthropomorphism
Humans are meaning-makers. When we encounter something that behaves intelligently, we instinctively assign it agency. That’s why we name storms, talk to pets, and expect LLMs to think like people.
But LLMs are not little minds in boxes. They don’t “intend.” They don’t “know.” What they produce is the most statistically likely continuation of what they’ve seen before. It looks smart. It often feels smart. But there’s no internal world behind the words. This isn’t a flaw—it’s the design.
DropIn, DropOut, and the Neuroplasticity Trap
A 2025 paper titled “Neuroplasticity in Artificial Intelligence” explores biologically inspired techniques for making AI more adaptive. It introduces concepts like “dropin” (adding artificial neurons) and “dropout” (removing them), drawing inspiration from how human brains grow and prune connections. These tools aim to make AI more flexible and resilient.
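To make that concrete without overclaiming, here is a rough Python sketch of what the two moves can look like in a standard deep-learning framework. The dropout call is the ordinary PyTorch regularizer; the drop_in helper is my own hypothetical illustration of adding units, not code from the paper.

```python
# Illustrative sketch only (my assumptions, not the paper's actual code):
# "dropout" below is the standard PyTorch regularizer that randomly zeroes
# a fraction of activations during training; "drop_in" is a hypothetical
# helper that widens a layer by appending freshly initialized units,
# loosely analogous to growing new connections.

import torch
import torch.nn as nn

layer = nn.Linear(16, 32)
dropout = nn.Dropout(p=0.1)          # randomly silences ~10% of activations

def drop_in(linear: nn.Linear, new_units: int) -> nn.Linear:
    """Return a wider copy of `linear` with `new_units` extra output units."""
    wider = nn.Linear(linear.in_features, linear.out_features + new_units)
    with torch.no_grad():
        wider.weight[: linear.out_features] = linear.weight  # keep old weights
        wider.bias[: linear.out_features] = linear.bias
    return wider

x = torch.randn(4, 16)
h = dropout(torch.relu(layer(x)))     # prune-like: thin the signal
layer = drop_in(layer, new_units=8)   # grow-like: add capacity
print(layer(x).shape)                 # torch.Size([4, 40])
```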
But even these advancements don’t make AI brain-like. They improve adaptability, yet the systems remain grounded in data patterns, not conscious processing. Framing these innovations as “plasticity” can reinforce the mistaken idea that AI systems operate like human cognition.
Coherence Without Consciousness
Here’s the quote for conferences and cocktail parties: “Brains generate meaning. LLMs generate coherence.”
You think, reflect, and imagine; the LLM doesn’t. It just processes on. Its outputs are governed by math, not mind. Every word it generates is the most probable next step: a statistical move based on patterns learned from massive data sets.
Some AI systems, like reinforcement learning agents or hybrid models, do mimic goal-driven behavior. But LLMs, which power today’s conversational AI, operate on a different principle: they aren’t reasoning; they’re aligning language to patterns they’ve seen before.
They don’t understand. But they don’t need to. That’s the revolution.
Don’t Copy the Brain—Transcend It
The future of AI isn’t in copying human minds. It’s in creating entirely new forms of cognition.
Brains have constraints: evolved hardware with limited speed, bounded memory, and embodied context. LLMs are something different, very different. They are disembodied intelligence (maybe coherence is now the better word), unbound by time, fatigue, or linear reasoning. They don’t replicate human thought; they point to a new category of intelligence.
And that difference isn’t a bug. It’s the foundation of transformation.
Beating a Dead Computer
I know that I might be “beating a dead computer” here. But the metaphor still dominates our collective thinking, and that’s the problem. We’re not witnessing the birth of synthetic humans. We’re encountering a new form of thought. One that doesn’t live in skulls, but in servers. One that doesn’t think with neurons, but with vectors—mathematical relationships between words and ideas.
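To give a flavor of what “thinking with vectors” means, here is a toy Python example. The three-dimensional vectors are invented for illustration; real models learn embeddings with thousands of dimensions from data. The point is only that relationships between words become geometry, so “related” ends up meaning “pointing in a similar direction.”

```python
# Toy illustration only: words as points in space, relationships as geometry.
# These 3-dimensional, hand-made vectors are invented for the example;
# real models learn high-dimensional embeddings from data.

import math

embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.2, 0.1],
    "apple": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Similarity between two vectors: closer to 1.0 means more alike."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

print(cosine(embeddings["king"], embeddings["queen"]))  # high: related words
print(cosine(embeddings["king"], embeddings["apple"]))  # low: unrelated words
```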
And this opens up entirely new possibilities. Imagine AI systems that refine global weather models with a precision we can’t match. Or generate mathematical proofs that defy our imagination. Or synthesize centuries of legal precedent into novel insights. These aren’t just tools; they’re scaffolds we can climb.
If we keep calling LLMs brains, we’ll keep misunderstanding them. And if we keep trying to make them human, we’ll miss the opportunity to let them become something else. This shift isn’t from natural to artificial. It’s from familiar to fundamentally new. And that’s the real story: not the simulation of ourselves, but the discovery of something other.