There’s a fundamental tension between the top-down tool-to-work model foundational to economics and the bottom-up work-to-tool model we see across other disciplines—a tension that Mokyr’s recent Nobel highlights. The […]
Stop Comparing AI to Railroads. It’s More Like the Crypto Boom.
Another day, another article telling us the AI boom is just like the railroad buildout of the 1800s. “Don’t worry about the bubble—infrastructure always finds its users eventually!”
This is dangerously wrong. Here’s why.
AI and the Art of the Mundane Breakthrough
The Economist piece “What if artificial intelligence is just a ‘normal’ technology?” got me thinking about historical analogies and how we construct them.
Narayanan and Kapoor use factory electrification as their analogy—a 30-year process requiring total rethinks of floor layouts and organizational structures. But this example has always felt like classic post-hoc sense-making: we see a transformation, find the “disruptive technology” that preceded it, and assume causation.
A better analogy might be the PC. There wasn’t a 30-year lag waiting for “adoption.” What we saw was a broader wave of business process reengineering that crystallized around ERP systems and regulatory requirements. PCs participated in this transformation but didn’t cause it.
The difference matters for how we think about AI deployment. If it’s like electrification, we’re waiting for organizations to slowly restructure around the technology. If it’s like PCs, we’re watching AI get absorbed into larger systemic changes already underway—remote work acceleration, regulatory digitization, skill verification crises.
The Crooked Path
Why Breakthroughs Disappoint and Work Delivers
You know that feeling when you read about the latest “breakthrough” technology that’s going to change everything—fusion finally working, quantum computers achieving some new milestone, brain-computer interfaces getting closer to reality—and part of you feels excited but part of you thinks, haven’t I heard this before?
I’ve been carrying around a low-level disappointment about technology promises for years now. Remember when VR was going to transform everything? You bought into the hype, got a headset, used it enthusiastically for maybe two weeks, and now it’s gathering dust in a closet. Or self-driving cars: we’ve been perpetually “just a few years away” from full autonomy for over a decade now (and the current rollout still relies on an operations centre with remote drivers). Blockchain was going to revolutionise everything from voting to supply chains, but mostly it revolutionised speculation and energy consumption.
This got me wondering: why does this keep happening?
World Models and the Anchoring Problem
The AI research community is having another moment with world models. Quanta Magazine’s recent piece traces the concept back to Kenneth Craik’s 1943 insight about organisms carrying “small-scale models” of reality in their heads. Now, with LLMs showing unexpected capabilities, researchers are betting that better world models might be the key to AGI.
But there’s something revealing happening in this revival. The more we try to find coherent world models in current AI systems, the more we discover what researchers call “bags of heuristics”—disconnected rules that work in aggregate but don’t form unified understanding. When MIT researchers tested an LLM that could navigate Manhattan perfectly, its performance collapsed once just 1% of streets were barricaded at random. No coherent map, just an elaborate patchwork of corner-by-corner shortcuts.
This raises the grounding question: what keeps intelligence honest?
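The distinction is easy to see in miniature. Here’s a toy sketch of my own (not the MIT experiment’s actual setup): one agent memorises a fixed route through a grid of streets, while another keeps a real map and replans with breadth-first search. Barricade 1% of streets at random and the memorised route can silently break, while the map-holder just routes around the closures.

```python
import random
from collections import deque

def street(a, b):
    """Canonical key for the street between intersections a and b."""
    return (min(a, b), max(a, b))

def grid_streets(n):
    """An n x n grid of intersections; each adjacent pair is a street."""
    streets = set()
    for x in range(n):
        for y in range(n):
            if x + 1 < n:
                streets.add(street((x, y), (x + 1, y)))
            if y + 1 < n:
                streets.add(street((x, y), (x, y + 1)))
    return streets

def plan(start, goal, streets):
    """A 'coherent map' agent: breadth-first search over the live street graph."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        x, y = node
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if street(node, nb) in streets and nb not in seen:
                seen.add(nb)
                frontier.append(path + [nb])
    return None  # goal unreachable

random.seed(0)
n = 20
streets = grid_streets(n)

# A 'bag of heuristics' agent memorises one fixed route per trip.
memorised = plan((0, 0), (n - 1, n - 1), streets)

# Barricade roughly 1% of streets at random.
closed = set(random.sample(sorted(streets), len(streets) // 100))
remaining = streets - closed

# The memorised route breaks if any street along it is now closed;
# the agent holding a real map simply replans over what remains.
route_survives = all(street(a, b) in remaining
                     for a, b in zip(memorised, memorised[1:]))
replanned = plan((0, 0), (n - 1, n - 1), remaining)
```

The memorised-route agent has no way to notice a closure until it hits one, which is the “patchwork of shortcuts” failure mode in the smallest possible form.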
The AI Productivity Paradox
The recent decline in entry-level employment is not a consequence of AI’s revolutionary power. Instead, it is a symptom of a maturing industry entering a low-growth phase. AI is serving as a tool for cost optimisation rather than a driver of new value creation.
If generative AI were truly a general-purpose technology on par with electricity or the microprocessor, the macro data should already be unmistakable. Instead, the silence is deafening.
The Tyranny of the Ideal
There’s a persistent belief that policy failures happen because politicians ignore expert advice. Healthcare reform gets watered down by special interests. Climate action stalls because fossil fuel lobbies block rational carbon pricing. Immigration reform collapses because extremists prevent sensible comprehensive solutions.
The real story is more unsettling: these policies are failing precisely because politicians are following expert advice. The most sophisticated policy frameworks, implemented exactly as designed, produce the most predictable disasters. Our smartest people aren’t being ignored—they’re being followed faithfully toward systemic failure.
The Intelligent Hand
Why Richard Sennett’s The Craftsman Explains Our Current Expertise Crisis
Why do expert predictions keep failing while practical adaptations keep succeeding?
I’ve been tracking this pattern across domains—AI researchers confident about artificial general intelligence while consultants quietly discover ChatGPT helps structure client presentations; fusion physicists announcing breakthroughs while the technology remains perpetually “almost ready”; policy experts debating digital transformation frameworks while small businesses just start using whatever tools solve Tuesday’s problems.
The disconnect isn’t accidental. It reveals something fundamental about how knowledge actually develops versus how we think it should. And Richard Sennett’s The Craftsman, published in 2008, provides the clearest framework I’ve found for understanding why this split keeps widening—and why it matters more than we realize.
The Collapse of Narrative Attractors
Watching the cathedral of certainty crumble while the rest of us quietly bolt the next floor on
You’ve felt this. The same people who promised social media would democratise information now warn it’s destroying democracy. The same voices who said smartphones would liberate us from our desks now fret about screen addiction. The experts who assured us globalisation would lift all boats are now explaining why supply chains are fragile and manufacturing should come home.
It’s not just that they were wrong—it’s the whiplash. The confident certainty followed by equally confident reversals, as if the previous position never existed. As if the complexity was always obvious to anyone paying attention.
But here’s what’s actually happening: you’re watching narrative attractors collapse in real time.
Why AGI Isn’t a Compute Problem
Just read this fascinating piece on why we might be hitting AGI limits sooner than expected: The Road to Declining Marginal Intelligence.
The author, Andrew Cote, argues that AI performance scales logarithmically with compute—meaning steadily diminishing returns on each additional investment. GPT-5’s underwhelming performance compared to competitors seems to confirm we’re entering the “choppy waters” phase.
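To make the diminishing-returns claim concrete, here’s a toy numerical sketch of my own (an illustration of logarithmic scaling generally, not Cote’s actual model): if capability grows with the logarithm of compute, every 10x jump in compute buys the same absolute gain, so the gain per unit of compute keeps shrinking.

```python
import math

# Toy assumption for illustration: capability ~ log10(compute).
def capability(compute_flops):
    return math.log10(compute_flops)

# Each 10x jump in compute buys the same absolute capability gain...
early_gain = capability(1e22) - capability(1e21)  # 10x jump at small scale
late_gain = capability(1e26) - capability(1e25)   # 10x jump at frontier scale

# ...but the later jump starts from 10,000x more compute, so the
# marginal return per FLOP has fallen by that same factor.
cost_ratio = 1e25 / 1e21
```

Under this toy curve the frontier lab pays four orders of magnitude more for the same one-unit step—which is what “choppy waters” looks like on a spreadsheet.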
But this connects to something I’ve been thinking about (most recently in AI Writing Tools and Cognitive Decline: The Wrong Question?). The piece suggests LLMs hit limits because they can’t become “smarter than their environment”, which is human knowledge.
I think this points to something deeper: intelligence isn’t something we HAVE, but something we DO in relationship with our environment. The constraint isn’t computational: it’s environmental coupling.