The Economist piece “What if artificial intelligence is just a ‘normal’ technology?”1 got me thinking about historical analogies and how we construct them.
Narayanan and Kapoor use factory electrification as their example: a 30-year process that required a total rethink of floor layouts and organizational structures. But this example has always felt to me like classic post-hoc sense-making: we see a transformation, find the “disruptive technology” that preceded it, and assume causation.
A better analogy might be the PC. There wasn’t a 30-year lag waiting for “adoption.” What we saw was a broader wave of business process reengineering that crystallized around ERP rollouts and regulatory requirements. PCs participated in this transformation but didn’t cause it.
The difference matters for how we think about AI deployment. If it’s like electrification, we’re waiting for organizations to slowly restructure around the technology. If it’s like PCs, we’re watching AI get absorbed into larger systemic changes already underway—remote work acceleration, regulatory digitization, skill verification crises.
This connects to David Landes’ observation in The Unbound Prometheus2: the small, boring increments often mattered more than the headline technologies. The real Industrial Revolution wasn’t just steam engines; it was better lubrication, standardized screws, improved metallurgy: the unglamorous stuff that made the glamorous stuff actually work.
With AI, we’re obsessing over foundation models while the real action might be in prompt engineering folklore, fine-tuning workflows, and the mundane integrations happening in every office. The paralegal who’s jailbroken ChatGPT for discovery motions isn’t adopting “AI”—she’s solving a problem with whatever works.
Here’s what’s interesting: LLMs might follow normal technology adoption curves while enabling something genuinely unprecedented. For the first time, we can manipulate the large-scale statistical structure of language itself. Language as sedimented experience, intelligence as interpretive practice—suddenly workable at the scale of entire linguistic traditions.
Maybe the real story isn’t “normal vs. revolutionary” but “normal adoption patterns enabling genuinely new forms of human-language interaction that we’re still discovering.”
The radical part isn’t the AI becoming intelligent. It’s what becomes possible when humans can work with meaning at previously impossible scales.
Still thinking through this, but the PC analogy feels more generative than factory floors. What historical parallels are you finding useful?
1. The Economist. “What If Artificial Intelligence Is Just a ‘Normal’ Technology?” September 6, 2025. https://www.economist.com/finance-and-economics/2025/09/04/what-if-artificial-intelligence-is-just-a-normal-technology. ↩︎
2. See, for example, the discussion of textile manufacture, p. 87, in “The Industrial Revolution in Britain,” in Landes, David S. The Unbound Prometheus: Technological Change and Industrial Development in Western Europe from 1750 to the Present. 2nd ed. Cambridge University Press, 2003. ↩︎