Why every prediction about AI and work is already wrong

In 2023 Felten et al.1 took a snapshot of work-as-described and asked which bits looked like language tasks. It aged badly because the snapshot mistook the current frame for a stable target. A 2026 NBER chaining paper2 is more sophisticated: it sees that adjacency and sequence matter, not just task content. But it still assumes the step structure is given and stable enough to reason about, so it too will age badly. Both use deductive models of a system that evolves faster than the models can be validated.

The deeper problem isn’t methodological, it’s ontological. ‘Task’ is neither atomic nor stable. Work isn’t a list of things people do—it’s a socio-technical system under tension, and what counts as a task is partly an artefact of the current configuration of that system. When the configuration shifts, our categories shift with it. You can’t photograph a river.

We’re not just in another technological transition. We’re in one where the cognitive infrastructure available to navigate change is itself unprecedented: the interpretive frameworks, the cross-disciplinary connections, the sheer density of collective sense-making. More concepts, shorter distances between ideas, faster recombination. Which sounds optimistic but has a sting: if recombination outpaces experience, we’re operating with more unprocessed novelty than our institutions know how to absorb. The anxiety about AI that exceeds any specific fear about jobs or safety may reflect exactly this. Not that the future is uncertain, which it always is, but that the usual process by which uncertainty becomes manageable is running behind the rate of change.

And LLMs are a precise illustration of the bind. They navigate accumulated human knowledge with extraordinary fluency, but they cannot rupture frames. At precisely the moment when recombination is fastest and institutions most need genuinely new configurations, the most powerful cognitive tools we’re deploying are structurally oriented toward the already-known.

The question isn’t which jobs survive. It’s whether we can develop the institutional and cognitive capacity to function in a system that’s reorganising faster than it can make sense of itself. So far, the answer is not obviously yes.

  1. Felten, Ed, Manav Raj, and Robert Seamans. “How Will Language Modelers like ChatGPT Affect Occupations and Industries?” arXiv:2303.01157. Preprint, arXiv, March 18, 2023. https://doi.org/10.48550/arXiv.2303.01157. ↩︎
  2. Demirer, Mert, John J. Horton, Nicole Immorlica, Brendan Lucier, and Peyman Shahidi. “Chaining Tasks, Redefining Work: A Theory of AI Automation.” NBER Working Paper No. 34859. National Bureau of Economic Research, February 2026. https://doi.org/10.3386/w34859. ↩︎