Blockchain performance might always suck, but that’s not a problem

I’ve been watching the Bitcoin scaling debate with some amusement, given that my technical background is in distributed AI and operational simulation (with some VR for good measure). Repeatedly explaining blockchain’s limitations to colleagues has worn thin, so I’ve posted a survey of the various scaling approaches on the Deloitte blog,[ref]Peter Evans-Greenwood (5 May 2016), Blockchain performance might always suck, but that’s not a problem, Deloitte Australia blog. Available at <http://blog.deloitte.com.au/greendot/2016/05/05/blockchain-performance-sucks-not-problem/>[/ref] pointing out why they won’t deliver – either separately or together – the 10,000-times improvement everyone is wishing for, and why this is not a problem. This post is the short version, one not aimed at the general audience the Deloitte blog has.

Distributed systems are not a familiar topic for many folk, and experience with n-tier enterprise, web or mobile solutions doesn’t translate. However, many of the scaling proposals aren’t much more than transposing a scaling technique from the web or database world into blockchain. Sharding, for example, might help you scale a database, but it’ll give you a sub-linear improvement at best on something like Bitcoin. Another good one is the assumption that Moore’s Law will enable us to continue increasing transaction throughput, but network performance doesn’t follow Moore’s Law.[ref]Martin Geddes, Five reasons why there is no Moore’s Law for networks. Available at <http://www.martingeddes.com/think-tank/five-reasons-moores-law-networks/>[/ref]

Bitcoin and blockchain scalability boils down to two things: communication limitations and the consistency guarantee.

Dealing with communication limitations is fairly straightforward.

We can play with parameters (block size, and the time between blocks), or mess about with how we define transactions to make them smaller (Segregated Witness) and squish more transactions into each block. This might get us a one-time increase of a factor of 4-10.
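A back-of-the-envelope calculation shows what those parameters actually buy us. The 1 MB block size and ten-minute block interval are Bitcoin’s current settings; the 500-byte average transaction size is an assumption for illustration only.

```python
# Rough throughput implied by block size and block interval.
# 1 MB blocks every 10 minutes are Bitcoin's current settings;
# 500 bytes per transaction is an illustrative assumption.

def tx_per_second(block_size_bytes, block_interval_secs, avg_tx_bytes):
    """Transactions per second implied by the block parameters."""
    txs_per_block = block_size_bytes / avg_tx_bytes
    return txs_per_block / block_interval_secs

baseline = tx_per_second(1_000_000, 600, 500)   # ~3.3 tx/s
bigger   = tx_per_second(8_000_000, 600, 500)   # 8x the block size: ~27 tx/s

print(f"baseline: {baseline:.1f} tx/s, 8 MB blocks: {bigger:.1f} tx/s")
print(f"improvement: {bigger / baseline:.0f}x -- nowhere near 10,000x")
```

Even an eight-fold increase in block size only moves us from a handful of transactions per second to a few dozen.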

We can reduce the volume of on-chain transactions via one of the many micropayment proposals. The most interesting of these is Lightning Network,[ref]Lightning Network. Available at <https://lightning.network/>[/ref] as it enables payments between micropayment channels. The impact of these is vastly overestimated though, as few people will put all their working capital into an unstable currency, nor will they lock that capital up for a year. They also add complexity to an already complex platform.
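To see why channels help at all, here is a minimal sketch of the general idea (not the actual Lightning Network protocol): any number of payments inside a channel settle as just two on-chain transactions, but the channel’s capacity has to be deposited up front and stays locked for the life of the channel.

```python
# Minimal sketch of a payment channel: many off-chain payments net
# out to two on-chain transactions (open and close). This is the
# general idea only, not the Lightning Network protocol itself.

class PaymentChannel:
    def __init__(self, deposit):
        # Opening the channel locks the deposit on-chain for the life
        # of the channel -- this is the working-capital problem.
        self.balance_a = deposit   # payer's side
        self.balance_b = 0         # payee's side
        self.on_chain_txs = 1      # the funding transaction
        self.off_chain_payments = 0

    def pay(self, amount):
        if amount > self.balance_a:
            raise ValueError("channel capacity exhausted")
        self.balance_a -= amount
        self.balance_b += amount
        self.off_chain_payments += 1   # no blockchain transaction needed

    def close(self):
        self.on_chain_txs += 1         # the settlement transaction
        return self.balance_a, self.balance_b

channel = PaymentChannel(deposit=1000)
for _ in range(500):
    channel.pay(1)                     # 500 micropayments, all off-chain
channel.close()

print(f"{channel.off_chain_payments} payments settled with "
      f"{channel.on_chain_txs} on-chain transactions")
```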

Messing with the consistency guarantee is more complex.

Bitcoin solves the “double spend” problem with double entry accounting, and it ensures the integrity of the accounting system by guaranteeing that all transactions are globally unique and partially ordered. A “new” transaction must not be a duplicate of a transaction in any prior block, nor may it be a duplicate of a transaction already in the current block. For this to work we must somehow look at every pair of transactions to determine that they are all unique.
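A simplified sketch of what that guarantee demands is below. Bitcoin’s real implementation tracks unspent outputs (the UTXO set) rather than comparing raw transactions pairwise, but the effect is the same: a new transaction has to be checked against everything already accepted, not just against some convenient local subset.

```python
# Simplified sketch of the global uniqueness check. Bitcoin's real
# implementation works over the UTXO set, but the guarantee is the
# same: a new transaction must be checked against everything already
# accepted, anywhere in the chain or earlier in the current block.

def validate_block(new_block, spent_outputs):
    """Accept a block only if no transaction re-spends an output that
    was already spent in the chain or earlier in this block."""
    seen_in_block = set()
    for tx in new_block:
        for output_ref in tx["inputs"]:
            if output_ref in spent_outputs or output_ref in seen_in_block:
                raise ValueError(f"double spend of {output_ref}")
            seen_in_block.add(output_ref)
    spent_outputs.update(seen_in_block)

spent = {("tx0", 0)}                   # output already spent on-chain
block = [
    {"inputs": [("tx1", 0)]},          # fine
    {"inputs": [("tx1", 0)]},          # duplicate within the same block
]
try:
    validate_block(block, spent)
except ValueError as err:
    print(err)                         # double spend of ('tx1', 0)
```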

This point seems frequently misunderstood, as you can see in this quote from one of the many Bitcoin scalability white papers, one that is representative of the general trend:

The problem of simultaneously achieving the best of both worlds: having only a small portion of consensus participants explicitly participate in validating each transaction, but have the entire weight of the network implicitly stand behind each one, has proven surprisingly hard.[ref]Vitalik Buterin (31 May 2015), Notes on Scalable Blockchain Protocols. Available at <https://github.com/vbuterin/scalability_paper/blob/master/scalability.pdf>[/ref]

Well, no. If we want to guarantee that each transaction is globally unique then we must inspect every transaction. There’s nothing “implicit” about this. And if we weaken this guarantee (such as only ensuring that transactions are locally unique) then we break Bitcoin.

There’s a bunch of proposals, from “tree chains” to sharding, that try to break up the consensus space in some way, only to discover that they need to create a lot more complexity to maintain the consistency guarantee. The end result is a sub-linear improvement at best while increasing the cost per block/transaction due to the extra mining needed to support the additional consistency infrastructure.
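As a toy illustration (not modelled on any particular proposal), here is what goes wrong if each shard validates only against its own history: a double spend split across two shards passes both local checks, so extra cross-shard machinery is needed to restore the global guarantee, and that machinery is where the cost creeps back in.

```python
# Toy illustration of why naive sharding weakens the consistency
# guarantee: if each shard checks inputs only against its own history,
# spending the same output once on each of two shards passes both
# local checks. The routing rule here is purely illustrative.

def locally_valid(tx, shard_spent):
    """A shard only checks inputs against outputs it has seen spent."""
    return not any(ref in shard_spent for ref in tx["inputs"])

shard_spent = [set(), set()]                   # two shards, separate histories

tx_a = {"txid": "a", "inputs": [("coin", 0)]}
tx_b = {"txid": "b", "inputs": [("coin", 0)]}  # spends the same coin

routing = {"a": 0, "b": 1}                     # illustrative shard assignment

for tx in (tx_a, tx_b):
    shard = routing[tx["txid"]]
    if locally_valid(tx, shard_spent[shard]):
        shard_spent[shard].update(tx["inputs"])
        print(f"tx {tx['txid']} accepted by shard {shard}")

# Both transactions are accepted, so the coin has been spent twice.
# Detecting this requires the shards to consult each other -- the very
# global check that sharding was supposed to avoid.
```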

The only paper I’ve seen so far that takes a sound approach to the problem is On Scaling Decentralized Blockchains.[ref]K. Croman, C. Decker, I. Eyal, A.E. Gencer, A. Juels, A. Kosba, A. Miller, P. Saxena, E. Shi, E. G. Sirer, D. Song, and R. Wattenhofer. On Scaling Decentralized Blockchains (A Position Paper). BITCOIN’16. Available at <http://initc3.org/scalingblockchain/full.pdf>[/ref] Unfortunately this paper isn’t getting much traction, probably because it points out that there is no silver bullet that will deliver a 10,000-times performance improvement, and that any significant improvement will involve a lot of work and a significant amount of change to the blockchain, change that will not be backwards compatible (i.e. a hard fork).

As I point out in the long blog post:

We can easily do a lot better than Bitcoin’s few transactions per second, but the only parameters we have to play with are dwell time, block size and the strength of the consistency guarantee. Assuming that some smart person is just going to walk in and solve this problem is hubris.

Image: Zach Copley.