The Collapse of Narrative Attractors

Watching the cathedral of certainty crumble while the rest of us quietly bolt the next floor on

You’ve felt this. The same people who promised social media would democratise information now warn it’s destroying democracy. The same voices who said smartphones would liberate us from our desks now fret about screen addiction. The experts who assured us globalisation would lift all boats are now explaining why supply chains are fragile and manufacturing should come home.

It’s not just that they were wrong—it’s the whiplash. The confident certainty followed by equally confident reversals, as if the previous position never existed. As if the complexity was always obvious to anyone paying attention.

But here’s what’s actually happening: you’re watching narrative attractors collapse in real time.

When the Maps Stop Working

A prominent AI researcher engages with a comment challenging his framework of “AI capabilities.” The commenter argues that debating whether LLMs are “intelligent” misses the point—the real transformation is architectural, not scalar. Intelligence isn’t something models “climb toward”; it emerges from distributed cognitive systems.

The researcher responds with an appeal to institutional authority: “We address this in section 3.”

The commenter responds with detailed analysis drawing on extended mind theory,1 distributed cognition research,2 and ethnographic questions about how doctors actually use AI tools. Substantive, referenced, directly engaging with the systemic questions.

The researcher’s final response: “Are you using LLMs to write this? Is this a bit?” Performative bafflement. Unable to engage with the argument because it doesn’t fit his narrative frame.

The commenter walks away: “The point stands. Calling LLMs ‘intelligent’ is a category error.”

This isn’t just academic disagreement. It’s what happens when you try to navigate distributed coordination using “Industrial Revolution” maps. The Thames is still there, but everything else has shifted.

For years, public conversation about AI orbited two massive gravity wells: super-intelligence-as-saviour and super-intelligence-as-existential-risk. These stories were tidy, cinematic, morally unambiguous. They gave everyone a role: accelerate the Rapture or prevent the Apocalypse.

Now the actual artefacts are here, and they refuse to stay in either well. They leak out sideways: helpful, tacky, biased, profitable, whimsical, spammy, occasionally sublime. The result is cognitive dissonance: emotional circuitry tuned for binary outcomes being fed a spectrum.

This reflection is distilled from an actual exchange; the speakers remain anonymous. What looked like a disagreement was, in fact, a clash between attractors: one shaped by an older, centralised model of intelligence, the other by a distributed, systemic view. Neither side was wrong in its own terms—but the arguments slid past each other, as if spoken in different dialects.

This is the pattern beneath the pattern: the old attractor assumes intelligence lives inside things (brains, models), while the new one sees it as something that happens between them. The former leads to debates about ‘how smart AI is’; the latter to questions about how systems redistribute cognition—and who bears the cost when they fail.

The Boring Apocalypse

The end of the world as we know it doesn’t look like Hollywood promised. No dramatic moment when everything changes. No clear before and after.

Instead: your doctor starts using AI diagnostic tools without fanfare. Your bank updates its fraud detection algorithms. Your kid’s teacher discovers ChatGPT helps with differentiated lesson planning. The paralegal at your friend’s firm finds LLMs draft better discovery motions than the associates. The small business owner down the street automates their invoicing.

While experts debate alignment theory and regulatory frameworks, there’s already an underground railroad of practical adaptation happening. People aren’t waiting for permission or consensus—they’re just solving problems. The 15-year-old storyboarding with Midjourney. The consultant using Claude to structure client presentations. The academic using GPT-4 to generate research questions they hadn’t thought to ask.

None of these people are waiting for alignment papers or singularity countdown clocks. They’re treating AI like PageMaker ’89: glitchy, powerful, plastic in human hands. While utopians and doomers argue about whether AI will save or doom us—both assuming intelligence is something you either have more or less of—actual users are discovering that these systems work more like linguistic partners than computational engines. The politics of the tool are being worked out in use, not in manifestos. But the people making governance decisions often aren’t the people using the tools.

This is the revolution: not the technology, but the million mundane adaptations no one’s tracking. The boring apocalypse is administrative.

The same pattern appears everywhere: while governments debate carbon pricing frameworks, companies are quietly renegotiating supplier relationships around climate risk. While international bodies draft battery mineral agreements, automakers are already securing lithium through direct partnerships. The coordination happens despite the absence of consensus about what it means or who’s in charge.

This pattern isn’t unique to our moment. In 1903, most people didn’t notice the Wright Brothers had just launched the age of aviation—the most authoritative voices had been confidently wrong about what was immediately possible. But there’s another historical pattern: sometimes everyone knows the old stories have broken, but no one knows which new story will win. The 1970s felt exactly like this—multiple possible futures competing for dominance while existing institutions seemed exhausted. Carter literally went on TV to tell Americans they were experiencing a “crisis of confidence.” The Weimar Republic faced similar narrative collapse, with competing visions (communist revolution, fascist takeover, liberal democracy) all seeming possible simultaneously.

The same pattern played out with every transformative technology. While institutions debated whether automobiles would frighten horses or liberate workers, mechanics were learning to fix engines and entrepreneurs were opening gas stations. While policymakers worried about telephone privacy, switchboard operators were creating the infrastructure of modern communication.

We metabolised steam, electricity, and the web the same messy way, but most people weren’t watching the debugging in real time. With AI, the sausage factory is on Twitch 24/7, so ordinary engineering iteration gets mistaken for reckless hubris.

Why Now: The Three Forces

This isn’t just about AI. Three deeper dynamics are converging to make “the story of now” feel persistently contested:

  • Affordance Explosion: We’re in a moment where multiple possible futures seem available simultaneously. AI, climate tech, distributed governance, digital sovereignty—each presents a different constraint domain around which a new equilibrium might form. But without a clear dominant constraint, there’s no single “arc” to align stories around. The future feels foggy not because it’s unknowable, but because it’s overdetermined.
  • Narrative Lag: Our dominant stories—“digital transformation,” “Fourth Industrial Revolution,” “globalisation”—are leftovers from earlier transitions. The present is shaped by tensions these frameworks can’t capture. We’re using Industrial Revolution categories to understand distributed coordination. The AI debate gets stuck using computational metaphors (“intelligence as noun”) that miss what’s happening in practice.
  • Epistemic Metabolisation: Traditional gatekeepers haven’t disappeared, but they’ve been absorbed into new coordination systems. Expertise is transforming from performed authority to infrastructural function. Scientists, journalists, consultants still exist, but now operate as components in larger networks rather than sovereign authorities. Every actor can curate credible narratives without institutional validation. LLMs accelerate this by generating plausible explanations on demand.

These dynamics interact: affordance explosion creates interpretive challenges that exceed traditional authorities’ capacity, forcing the metabolisation of expertise into new coordination systems that work despite permanent disagreement about who’s in charge of meaning.

The Underground Railroad of Adaptation

Consider how the actual AI transformation is unfolding. The EU AI Act, White House executive orders, foundation model licences—all conceived when legislators thought they were regulating godlike minds. Now they’re policing souped-up autocomplete that’s excellent at phishing emails.

Meanwhile, the real integration happens quietly: the consultant who discovered AI can synthesise client interviews faster than human analysis. The teacher using it to generate personalised practice problems. The lawyer who found it drafts better contracts than junior associates. The designer who uses it for rapid prototyping.

They’re not debating consciousness or alignment. They’re solving Tuesday’s problems with Wednesday’s tools. The revolution isn’t replacement—it’s reorganisation that functions in the dark, without needing anyone to agree on who controls the lights.

This pattern extends beyond technology. Consider how expertise itself is being reorganised. The same research institutions that spent decades building authority through peer review and credentialing now find their knowledge integrated into systems that work regardless of whether users understand the underlying theories.

Traditional consulting illustrates the shift clearly. Instead of selling expertise-as-authority (“hire us because we’re smart”), firms increasingly embed expertise-as-infrastructure (“hire us because our systems work”). They succeed not by convincing clients they’re right about problem definitions, but by delivering results despite permanent disagreement about what the problems actually are.

What Comes Next: Domestication Drift

Probably a drift toward domestication narratives: stories that normalise AI as infrastructure rather than miracle or menace. Think seatbelts, not salvation. Building codes, not Butlerian Jihad.

This shift disappoints the eschatologically inclined, but it’s where durable governance happens. Once the rhetorical temperature drops, there’s oxygen for smaller, solvable problems: labour retraining standards, provenance requirements for synthetic media, disclosure norms in scientific publishing.

Several predictable consequences follow:

  • Status Anxiety Among Pundits: If the technology isn’t epochal, the cottage industry of grand pronouncements looks ridiculous—Y2K experts still giving interviews in January 2000.
  • Tool-Usage Pragmatism Feels Small: A culture that wants technology to be Prometheus or Pandora struggles with Swiss Army knife reality.
  • Learning-by-Doing Becomes Visible: Previous technological transitions happened largely out of public view. Now the awkward adolescence of major technologies plays out on social media, making normal iteration look like chaos.

The Pattern Beneath the Pattern

This reveals something larger about how change actually works. The stories that organise our thinking about transformation—the narrative attractors—aren’t neutral descriptions. They’re coordination mechanisms. They work by providing shared frameworks that let large groups act in concert despite disagreeing about details.

But when reality outpaces the stories, the coordination breaks down. Not into chaos, but into something more distributed and resilient: post-consensus coordination. Systems that work despite permanent disagreement about what they mean or who’s in charge.

You’ve seen this before, even if you didn’t recognise the pattern. The internet protocols that route your email don’t require global consensus about privacy, governance, or digital rights. They just work. Supply chains coordinate across incompatible legal systems, currencies, and cultural frameworks. Open source software gets built by people who disagree about everything except the code.

The revolution isn’t replacement of old systems with new ones. It’s the emergence of coordination mechanisms that function without requiring shared stories about what they’re doing or why they matter.

Recognition and Response

The next time you encounter institutional bafflement, expert whiplash, or reality refusing to fit available narratives, you’ll recognise what you’re seeing: not chaos, but reorganisation. Not the failure of sense-making, but its transformation.

The 1970s felt apocalyptic to people living through them—oil shocks, Watergate, stagflation, cultural upheaval. Everyone knew something fundamental had shifted, but faced competing futures: environmentalism, feminism, computing, globalisation all vying for dominance. But what looked like permanent crisis was just narrative reorganisation in progress. By the 1980s, a new equilibrium had emerged. The chaos wasn’t terminal—it was transitional.

Today feels overwhelming because we’re experiencing both patterns simultaneously: invisible revolution (like 1903) happening alongside visible narrative collapse (like 1973). We’re not just adopting new technologies—we’re in a moment where multiple possible civilisational paths seem available at once.

The real action isn’t in the grand pronouncements about what it all means. It’s in the practical adaptations happening beneath official notice. The boring work of figuring out how to use new tools, navigate new constraints, coordinate across new distances.

This is how revolutions actually happen. Not through the collapse of old stories and their replacement with new ones, but through the patient construction of systems that work despite our inability to agree on what we’re building or why it matters.

The future is being built by people who stopped waiting for the story to make sense and started solving problems instead. While we debate the meaning, they’re creating the infrastructure. While we argue about the map, they’re learning the territory.

The revolution is administrative. The transformation is mundane. The future is being assembled one practical adaptation at a time, by people who’ve discovered that working systems matter more than winning arguments about what those systems mean.

And perhaps that’s the most important insight of all: the stories that used to organise our thinking about change were never really about the change itself. They were about managing our anxiety in the face of uncertainty. When we stop needing the stories to make emotional sense, we can start building things that actually work.

The collapse of narrative attractors isn’t a crisis to be solved. It’s a constraint to be navigated. And the people navigating it best aren’t the ones debating what it means—they’re the ones discovering what becomes possible when you simply start solving.

  1. Clark, Andy, and David Chalmers. “The Extended Mind.” Analysis 58, no. 1 (1998): 7–19.
  2. Hutchins, Edwin. “How a Cockpit Remembers Its Speeds.” Cognitive Science 19, no. 3 (1995): 265–88. https://doi.org/10.1207/s15516709cog1903_1.