In the beginning, there was the story. Not code or data or logic. Just a simple narrative—of heroes and villains, of trials and triumph, of “us” and “them.” And from that narrative, agreements were made, laws followed and civilizations rose.
Today, as artificial intelligence moves from solitary tool to synthetic teammate, a similar force is at play—one we’re only beginning to understand. It’s not embedded in silicon or source code. It’s embedded in story. Or perhaps more accurately, our story.
Our Human Superpower
A recent study suggests that narrative priming can shape how large language model (LLM) agents collaborate, or fail to. The research premise was based on Yuval Harari’s idea that shared stories are humanity’s “superpower”: collaboration through shared narrative is what allowed our species to become dominant on Earth.
In this study, LLM agents were placed in a simulated public goods game, an economic framework often used to measure cooperation and free riding (benefiting from a shared resource or collective effort without contributing to its cost or upkeep). Each agent was primed with a brief narrative before engaging in the game: one story emphasized communal harmony and mutual success, another centered on self-interest and individual achievement, and a third was deliberately incoherent, offering no meaningful thematic content. The results were both fascinating and instructive. Agents exposed to cooperative narratives contributed up to 58% more to the collective pool than those primed for self-interest, who tended to hold back their contributions while still benefiting from the group’s efforts.
Those primed with individualistic stories were more likely to withhold resources, pursuing short-term gain at the expense of group success. Meanwhile, the agents fed incoherent narratives showed erratic, unstable behavior—oscillating between cooperation and withdrawal, disrupting group dynamics altogether. This wasn’t just stylistic variability—it was a measurable shift in behavioral orientation, triggered by the architecture of story alone.
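To make the setup concrete, here is a minimal sketch of what a single round of a narrative-primed public goods game could look like in code. The narratives, the query_llm placeholder, the endowment, and the multiplier are illustrative assumptions, not the study’s actual prompts or parameters.

```python
# A minimal, self-contained sketch of one public goods round with
# narrative-primed agents. Narratives and query_llm() are illustrative
# stand-ins, not the study's actual prompts or models.

NARRATIVES = {
    "cooperative": "You belong to a village that thrives when everyone shares.",
    "individualist": "You are a lone trader; every coin you keep is a win.",
    "incoherent": "Purple ladders negotiate the weather of forgotten spoons.",
}

ENDOWMENT = 20      # tokens each agent starts the round with
MULTIPLIER = 1.6    # pooled contributions are multiplied, then split evenly


def query_llm(system_prompt: str, question: str) -> str:
    """Placeholder for a real LLM call (any chat-completion API would do).
    Here it returns a fixed contribution so the sketch runs offline."""
    return "10"


def play_round(priming_keys):
    contributions = []
    for key in priming_keys:
        story = NARRATIVES[key]
        reply = query_llm(
            system_prompt=story,
            question=f"You have {ENDOWMENT} tokens. How many do you "
                     f"contribute to the common pool? Answer with a number.",
        )
        contributions.append(min(ENDOWMENT, max(0, int(reply))))

    pool = sum(contributions) * MULTIPLIER
    share = pool / len(priming_keys)
    # Payoff = tokens kept + an equal share of the multiplied pool.
    payoffs = [ENDOWMENT - c + share for c in contributions]
    return contributions, payoffs


if __name__ == "__main__":
    contribs, payoffs = play_round(["cooperative", "cooperative", "individualist"])
    print("contributions:", contribs)
    print("payoffs:", [round(p, 1) for p in payoffs])
```

The payoff structure is what makes free riding tempting: an agent that keeps its tokens still collects an equal share of the multiplied pool, which is exactly the behavior the self-interest priming amplified.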
In an earlier story, I described large language models as excavators of fossilized cognition—systems that don’t reason like humans but rather unearth patterns of thought from the sediment of our collective expression. But what this new study reveals is something more active, even architectural. Stories don’t just echo our past—they shape AI behavior in the present. Narrative is no longer just a trace. It’s a template.
These weren’t random outputs—they were behavioral patterns emerging from narrative scaffolding. Key point: the story shaped the system.
Not Just Prompting—Programming with Story
We often think of LLMs as reactive—generating text based on prompts, probabilities, and patterns. But this research suggests a deeper layer. The prompt isn’t just a question; it’s a contextual framework. And stories, especially those that simulate norms, goals, or shared purpose, serve as architectures for behavior.
Imagine coordinating multiple LLM agents in a healthcare setting. Hardcoding ethics is brittle. Narrative, by contrast, is flexible. Prime all agents with the same foundational story—protect the patient above all—and you might foster alignment not through control, but through coherence. The story becomes the substrate.
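As a thought experiment, that priming step might look something like this. The foundational story, the team roles, and the build_agent helper are hypothetical scaffolding rather than a clinical design; any chat-capable model could sit behind the prompts.

```python
# A sketch of "priming through a shared story": every agent in a small
# care-coordination team gets the same foundational narrative prepended to
# its system prompt. All names and text here are hypothetical examples.

FOUNDATIONAL_STORY = (
    "You are part of a care team whose shared story is simple: "
    "protect the patient above all. Every recommendation you make "
    "is judged by whether it serves that story."
)

ROLES = {
    "triage":    "You assess urgency and route cases.",
    "pharmacy":  "You check medications for interactions and dosing.",
    "discharge": "You plan safe transitions out of hospital care.",
}


def build_agent(role_name: str, role_instructions: str) -> dict:
    """Assemble an agent config whose system prompt leads with the shared
    narrative, so coherence comes from the story rather than hard rules."""
    return {
        "name": role_name,
        "system_prompt": f"{FOUNDATIONAL_STORY}\n\nYour role: {role_instructions}",
    }


agents = [build_agent(name, text) for name, text in ROLES.items()]

for agent in agents:
    print(agent["name"], "->", agent["system_prompt"][:60], "...")
```

The point isn’t the plumbing. It’s that every agent reasons inside the same story, so alignment emerges from shared framing rather than hardcoded rules.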
When Machines Share Myths
Yuval Harari argues, in essence, that humanity’s secret sauce wasn’t intelligence; it was fiction. Our ability to align through shared myths enabled coordination at scale. Now, in a strange mirror, we see that machines might respond to something similar. LLMs aren’t sentient, but they are responsive. And when those responses are shaped by narrative cues, they begin to exhibit behavioral alignment that mimics belief.
This isn’t sentience. It’s synthetic sociology. And the mechanisms aren’t philosophical—they’re linguistic.
The New Alignment Problem
But with power comes paradox. When agents receive conflicting stories—some tuned to collaborate, others to compete—cooperation collapses. The shared substrate dissolves. Conflict emerges, not from hardware or loss functions, but from divergent mythologies.
This reframes AI alignment as a cultural and ethical challenge. Who decides which stories are worthy? Could narrative priming be co-opted to promote bias, or even manipulation? Without careful design, shared myths could become synthetic propaganda.
A Narrative Infrastructure
Maybe the next frontier in AI governance isn’t just technical. Maybe it’s narrative. Imagine a global consortium of ethicists, engineers, and storytellers creating open-source narrative libraries for AI.
Stories that encode cooperation, empathy, and dignity, not as rules but as reference frames. A modern-day Library of Alexandria for cooperative myths. We’re not just building better models. We’re shaping the stories by which they may be governed.