The Collapse of Narrative Attractors

Watching the cathedral of certainty crumble while the rest of us quietly bolt the next floor on

You’ve felt this. The same people who promised social media would democratise information now warn it’s destroying democracy. The same voices who said smartphones would liberate us from our desks now fret about screen addiction. The experts who assured us globalisation would lift all boats are now explaining why supply chains are fragile and manufacturing should come home.

It’s not just that they were wrong—it’s the whiplash. The confident certainty followed by equally confident reversals, as if the previous position never existed. As if the complexity was always obvious to anyone paying attention.

But here’s what’s actually happening: you’re watching narrative attractors collapse in real time.


Why AGI Isn’t a Compute Problem

Just read this fascinating piece on why we might be hitting AGI limits sooner than expected: The Road to Declining Marginal Intelligence.

The author, Andrew Cote, argues that AI performance scales logarithmically with compute, meaning steadily diminishing returns on each additional dollar invested. GPT-5’s underwhelming performance compared to competitors seems to confirm we’re entering the “choppy waters” phase.
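Cote’s scaling argument can be sketched numerically. In the toy model below, the constants `a` and `b` are purely illustrative, not fitted to any real benchmark; the point is the shape of the curve, not the values:

```python
import math

def capability(compute, a=1.0, b=0.0):
    # Toy logarithmic scaling law: capability grows with log(compute).
    # a and b are illustrative constants, not fitted to real data.
    return a * math.log10(compute) + b

# Each 10x jump in compute adds the same *absolute* capability gain...
gains = [capability(10**k) - capability(10**(k - 1)) for k in range(2, 6)]

# ...so the *marginal* gain per extra unit of compute keeps shrinking.
marginal = [
    (capability(10**k) - capability(10**(k - 1))) / (10**k - 10**(k - 1))
    for k in range(2, 6)
]
```

Every tenfold increase in compute buys the same fixed step of capability, so the return per additional unit of compute falls by roughly an order of magnitude each step: the “declining marginal intelligence” of the title.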

But this connects to something I’ve been thinking about (most recently in AI Writing Tools and Cognitive Decline: The Wrong Question?). The piece suggests LLMs hit limits because they can’t become “smarter than their environment”, and for LLMs that environment is human knowledge.

I think this points to something deeper: intelligence isn’t something we HAVE, but something we DO in relationship with our environment. The constraint isn’t computational: it’s environmental coupling.


What Cells “Remember” Tells Us More About Us Than About Them

The July 30 article in Quanta, What Can a Cell Remember?, reports on a striking experimental finding: kidney cells grown in a dish appear to “anticipate” regularly spaced pulses of chemical signals. When the pulses are paused and then resumed, the cells respond differently—suggesting, at first glance, that they “remember” what came before.

It’s a compelling observation. But the interpretation it invites—framed in terms of “memory”—raises some interesting questions about how we make sense of biological systems, especially in an era where cognitive metaphors are everywhere.


Crabgrass Frontier

There’s a scene in Who Framed Roger Rabbit where Judge Doom lays out his plan to dismantle the trolley system and replace it with freeways. It’s intended to be cartoonishly evil, but the idea feels all too familiar. The demise of the streetcar and the rise of the car-centric suburb have long been framed as a conspiracy: businessmen colluding to kill transit, sell tires, and pave the future. Like many myths, there’s a sliver of truth. But the full story is both more mundane and more revealing.

Kenneth T. Jackson’s Crabgrass Frontier quietly dismantles these comforting narratives. Published in 1985, it remains one of the clearest accounts of how American suburbia was not the outcome of technological inevitability or malicious forces in society, but a product of consumer preference (a desire to find privacy through space) and policy design, shaped by incentives, subsidies, zoning, and a particular vision of the good life. In Jackson’s telling, suburbia wasn’t chosen. It was made.


Rewired

Much of today’s business writing is reductionist, focused on clean cause-and-effect narratives. This isn’t a flaw; most of the time, what organisations need is tactical advice: if you have X, do Y. Rewired is a strong example of this genre, offering a practical guide to how contemporary organisations structure, run, and deliver technology.

But we’re not in a steady state. We’re at the end of an era shaped by firm-centric efficiency, and entering one defined by networked coordination, contested data, and shifting boundaries of control.

We’re living through a transition away from the familiar paradigms of the last 30 years and toward something still taking shape. In that context, advice grounded in what worked before may be increasingly ill-suited to what comes next. The book excels at guiding firms through internal change, but falters when the real challenge is how firms relate to everything outside them. Rewired is useful for what it is. But how useful that is, right now, is an open question.


How Systems Evolve After Legitimacy Fails

Media still publishes. Science still tests hypotheses. Consultants still give advice. Universities still confer degrees. Doctors still diagnose.

But none of these institutions command authority like they used to.

Their outputs still circulate, but the performances that once legitimated those outputs—peer review, op-eds, credentials, protocols—no longer land with the same force. We use the infrastructure, but we’ve stopped believing in the ritual.

This is not a collapse of function. It’s a collapse of legitimacy. And it is reconfiguring our systems in profound ways.


Fluency Without Thought: New Evidence for the LLM Productivity Trap

A recent academic paper—Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing—offers compelling empirical evidence for a claim I’ve been exploring: that LLMs are reshaping knowledge work in ways that increase surface fluency while weakening deeper forms of cognitive engagement.


Why We Keep Misreading Disruption

We’re wired to spot disruption in the wrong places—chasing the latest AI feature or platform, expecting it to upend everything overnight. Google’s new ‘Shop with AI’ mode is already stirring such claims, but as my latest Substack essay, Why We Keep Misreading Disruption, explores, the real question isn’t what these technologies do, but what deeper systemic shifts they reveal. The piece unpacks why our visions of the future often miss the mark, how globalisation’s story helps explain structural change, and what it means to see disruption as a ‘punctuation’ rather than just incremental progress.


Prediction Without Disruption

The recent Stanford paper on Outcome-based Reinforcement Learning to Predict the Future (RLVR) could be seen as both a product of and a contributor to the cycle of misinterpreting disruption, as I discussed in Why We Keep Misreading Disruption. It’s advancing tools that improve prediction without necessarily addressing or understanding the foundational shifts that disruption entails.
