Forever ten years away

Why do some technologies always seem to be ten years away? We’re not talking about the science fiction dreaming of faster-than-light travel, or general AI and the singularity. Those ten years apply to technologies that forever seem to be just out of reach, just beyond our current technical capabilities, like nuclear fusion (as opposed to fission) or quantum computing. Researchers make incremental progress and we’re told that (once the technology works) it’s going to change everything, but despite this incremental progress, estimates of when the technology will be commercialised and so available to the public always seem to be in the ballpark of ‘ten years’.

The problem is that we’ve put the cart before the horse. It’s common—nearly universal—to assume that the process of technology development is along the following lines:

Observation (of some phenomena)
☞ (Pure) Research (of the phenomena)
☞ Applied (research) (finding potential applications)
☞ Commercialisation (profit!)

A lot of government policy and funding is focused on improving this flow, either by increasing investment in pure research to juice the rate at which observations are translated into ‘science’, or by trying to close the commercialisation gap between ‘research’ and ‘profit!’ (Australia, by the way, is considered to be quite good at the research thing, world leading in a number of areas, but poor at bridging the commercialisation gap to profit.)

The problem is that technology development rarely (if ever) works this way. To get all philosophical for a moment, the relevant Kranzberg Law is his Second:

Invention is the mother of necessity.

And I expect you just read that the wrong way around.

Innovations depend on many things, and the ‘big idea’ is typically less important than the many small ideas needed to bring the big idea to life. The Industrial Revolution, for example, is more a story of how the puddling furnace and leadscrew enabled us to create early machine tools than a story about the steam engine per se. We’d been playing with steam engines forever, and the idea of steam power can be traced back to somewhere between 30 and 15 BC, but the techniques available limited what we could build. It was only with the development of cheap steel and machine tools (i.e. precision fabrication) that old ideas could finally be realised, allowing us to move forward. Finally, once modern steam engines were driving industry forward, scientists developed thermodynamics (the science of steam) to better understand their operation.

It’s no coincidence, when you think about it, that Watt was an instrument maker working in a machine shop, Stephenson started out as a mining engineer, Newcomen was an ironmonger, &c. Other big innovations follow this template, emerging from messy learning-by-doing by people embedded in industry, incrementally solving problems. The vast majority of the technology we use comes out of praxis, not research. Understanding comes later, via science.

Both fusion and quantum are attempting to buck this trend, with proponents calling for major investment because these technologies will change everything, and they’re only ten years away. What these proponents fail to acknowledge is that success depends on many otherwise boring factors that they choose to ignore, not on the core idea.

The recent fusion breakeven announcement is a case in point. The US Energy Secretary said that the discovery will ‘go down in the history books’, while various scientists say there could be ramifications for climate change and energy security. At the press conference the lead researcher claimed that (paraphrasing) ‘fusion works, we’ve done our bit, the rest is just engineering’. All this hubris ignores the fact that the engineering is more important than the science. The breakeven they demonstrated was laser energy in versus radiation emitted, and it was marginal at best. We still need to capture that radiation and convert it into electrical power, and then make the entire end-to-end system (from sourcing and processing fuel, through powering the reaction, to capturing and transmitting the excess energy) efficient enough to bother with. While it’s very likely that fusion is possible, it’s not clear that it will be practical.

If we do make fusion practical, it’s also likely to be a lot more limited than we expect. Fission is a good example of this dynamic: early developments in reactors for submarines had many experts anticipating nuclear-powered planes, trains, and automobiles, or even home power. This never happened because, while we could scale the nuclear fission reaction down, we couldn’t (at the time) scale down the shielding. The Russians had a nuclear train that left the track radioactive due to insufficient shielding (the U.S. didn’t get past a feasibility study), and a nuclear plane whose shielding problems necessitated the pilots wearing radiation suits. Ford started development of a nuclear-powered car, the Ford Nucleon, but didn’t get past concept drawings.

It’s a similar story with quantum computing. Quantum is a particular solution to particular computing problems, not a ‘computer that tries all possibilities in parallel’. Nor does anyone have a good take on what these particular problems look like or where we might find them. Moreover, many of the problems that quantum was thought to be perfect for already have good-enough solutions, which means that better solutions won’t make much (if any) of an impact. Take the travelling salesman problem. Even if quantum lets us compute an optimal route (as opposed to a workable route), it won’t affect overall performance, as the quality of the solution is swamped by randomness soon after the solution is computed (your route didn’t factor in that fire or the resulting traffic jam), as sketched below. The most likely future for quantum is that we make it sort-of work and find practical uses in a few niche applications. It’s unlikely to justify the research investment, though.
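To make the point concrete, here’s a minimal sketch (an assumed illustration of the argument, not anything from a quantum algorithm or a real routing system): it brute-forces the optimal tour over a handful of random points, builds a quick nearest-neighbour ‘workable’ tour, then adds random per-leg delays and compares the realised travel times.

```python
import itertools
import math
import random

random.seed(1)
N = 8  # small enough to brute-force
points = [(random.random(), random.random()) for _ in range(N)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order, noise=0.0):
    """Length of a closed tour; 'noise' adds up to that fraction of random delay per leg."""
    total = 0.0
    for i in range(len(order)):
        leg = dist(points[order[i]], points[order[(i + 1) % len(order)]])
        total += leg * (1.0 + random.uniform(0.0, noise))
    return total

# Optimal tour by exhaustive search (fix city 0 so rotations aren't counted twice).
optimal = (0,) + min(itertools.permutations(range(1, N)),
                     key=lambda p: tour_length((0,) + p))

# 'Workable' tour: greedy nearest neighbour.
remaining, workable = set(range(1, N)), [0]
while remaining:
    nxt = min(remaining, key=lambda j: dist(points[workable[-1]], points[j]))
    workable.append(nxt)
    remaining.remove(nxt)

print("planned:  optimal %.3f  workable %.3f" % (tour_length(optimal), tour_length(workable)))
# Add up to 50% random delay per leg ('traffic') and compare the realised times.
print("realised: optimal %.3f  workable %.3f" % (tour_length(optimal, 0.5), tour_length(workable, 0.5)))
```

On most runs the planned gap between the two tours is modest, and once the per-leg noise is added it is largely lost in the variation, which is the point: a perfect plan is only as good as the assumptions it was computed under.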

Bringing an idea to life (invention) depends on many necessities that must be addressed: Kranzberg’s ‘invention is the mother of necessity’. Steam engines relied on machine tools, which relied on leadscrews, which relied on cheap steel. Similarly, the Manhattan Project was as much the child of engineering as of science, if not more so. J. Robert Oppenheimer and his team of scientists made an important contribution by leading the fifty-million-dollar applied research effort to outline a possible bomb, but it was Major General Leslie Groves who oversaw the two-billion-dollar programme to source the materials, refine the uranium, and manufacture the bomb and its delivery system, addressing the many necessities without which the bomb would have been impossible.

It’s a long journey from ‘idea’ or ‘theoretically possible’ to ‘practical’, and just because we can imagine a thing doesn’t mean we can make it. Transforming an idea into a product relies on us addressing the many little necessities without which the idea will remain merely an idea: a flying car, a hyperloop, or something forever ten years away. Both fusion and quantum are interesting ideas that may never be practical. And like hyperloop, an idea from the 1700s which remains unrealised due to switching and safety challenges, the amount of research funding they attract is not a good indicator of their future potential. We’re also at a point in time where many of the discoveries in materials &c are in the margins, so we can’t assume that we’ll sort out the ‘new puddling furnace’ that suddenly takes an idea from ‘theoretically possible’ to ‘practical’.