Casper CBC, UTXO accumulators, and proof-of-work security
Rhys Lindmark | Dec 7, 2018
Welcome to Cryptocurrency Research Review, Issue No. 1! This biweekly publication (curated by the MIT Media Lab’s Digital Currency Initiative in partnership with MIT Press) crowdsources high-quality reviews of interdisciplinary research from the fields of cryptocurrency and blockchain technology. We intend this publication to help surface impactful work in the space, to provide more neutral analysis of ongoing research, and to encourage interdisciplinary discussions in the community.
This is an experimental effort to figure out how best to meet the needs of cryptocurrency and blockchain researchers, so please share your feedback with us. You can use the links included at the end to submit your feedback or feel free to respond to this email directly. We hope you enjoy this first issue. Let’s get right to it!
Paper by Zamfir et al., Nov 5, 2018
Ethereum Research recently released a paper called “Introducing the ‘Minimal CBC Casper’ Family of Consensus Protocols.” Ethereum scalability research is an important topic: the network experienced severe scalability problems in production in 2017, due to fundamental design issues in Ethereum itself (as predicted in 2016, for example).
A close examination of the CBC Casper paper reveals a major shortcoming that calls into question whether the paper contributes to the material advancement of working consensus protocols and scalability efforts.
Liveness and safety are inseparable:
Specifically, the paper attempts to define and prove Byzantine safety without regard to liveness. However, liveness and safety are inseparable properties of Byzantine fault-tolerant consensus protocols. Because of this, we believe the approach presented in this paper is fundamentally the wrong way to begin designing a consensus protocol.
Correctness is a guarantee that once a decision has been reached, it will remain decided. Liveness is a guarantee that a protocol will do something useful (i.e., process transactions) even in the face of failures. Without a liveness guarantee, users cannot actually do anything useful with the protocol, because they cannot tell when a transaction should be considered successfully processed. For example, in Bitcoin, users need to know when they can consider a transaction confirmed.
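To make the Bitcoin confirmation example concrete, here is a small sketch (our illustration, following the well-known attacker catch-up calculation from the Bitcoin whitepaper, not anything in the CBC Casper paper) of why “confirmed after z blocks” is a probabilistic, liveness-flavored notion rather than a safety proof:

```python
from math import exp, factorial

def attacker_success_probability(q: float, z: int) -> float:
    """Probability that an attacker controlling fraction q of the hashpower
    eventually rewrites a transaction buried under z confirmations
    (the catch-up calculation from the Bitcoin whitepaper)."""
    p = 1.0 - q            # honest hashpower fraction
    lam = z * (q / p)      # expected attacker progress while z honest blocks arrive
    total = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam ** k / factorial(k)
        total -= poisson * (1.0 - (q / p) ** (z - k))
    return total
```

With q = 0.1, waiting six confirmations pushes the attacker’s success probability below 0.1%, which is why “confirmed” is meaningful to users even though nothing is ever decided with absolute finality.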
This paper attempts to prove correctness without considering liveness. This is a problem because a correctness proof without liveness is of little practical use, and most of the hard problems show up precisely when liveness is considered: proving one of the two properties doesn’t get you 50% of the way to a full protocol; it leaves you with almost nothing.
The proofs in the paper are useful to the extent an empty set of properties is useful. An empty set of properties might be “safe” but makes no progress towards having a practical protocol.
The paper’s definition of Byzantine fault tolerance as “BFT safety but without liveness and only for equivocation faults” is unconventional and, in our view, cements the wrong starting point for consensus protocol design.
[Check out the full review here.]
Review by Pieter Wuille
Much has been written about Bitcoin's scalability, frequently regarding the growth of the blockchain as a function of the network's transaction rate. But perhaps more concerning in the long term is the growth of the dataset needed for validation, called the UTXO set in Bitcoin. In the current protocol, every fully validating node needs to know at least which outputs of previous payments haven't yet been spent, in order to detect attempted double spends. While blockchain data is only accessed sequentially and can be provided by untrusted hosting, the UTXO set needs fast random access, and cannot be outsourced, as its integrity is critical for security. Furthermore, there are very few economic incentives to curb its growth.
A radical solution to this problem was described as early as 2012. By replacing the UTXO set with a continuously-updated commitment to the UTXO set, and including proof data in every transaction that its inputs are in fact included in this commitment, the trusted storage needed by validation nodes is reduced to a constant, categorically removing the UTXO growth concern. In practice, the design space for this class of solutions is large, and while many approaches have been discussed, little work has been done to actually analyze the various trade-offs and optimize a concrete design.
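To make the idea concrete, here is a minimal sketch (our own illustration, not code from any of the cited proposals; all names are ours) of committing to a UTXO set with a Merkle tree and proving that one output is included in the commitment:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(utxos):
    """Commit to a list of serialized UTXOs with a single 32-byte root."""
    level = [h(u) for u in utxos]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(utxos, index):
    """Collect the sibling hashes linking utxos[index] to the root."""
    level = [h(u) for u in utxos]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[index ^ 1])  # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(root, utxo, index, proof):
    """A validator holding only the root checks a claimed inclusion proof."""
    node = h(utxo)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root
```

A node storing only the constant-size root can validate a spend whose transaction carries such a proof; keeping those proofs up to date as the set changes is precisely the hard part that concrete designs must address.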
The Utreexo work focuses on building a practical protocol to accomplish this using Merkle trees, accounting for the non-negligible constant-factor increase in bandwidth to transmit the proofs, as well as the compatibility layer that will inevitably be needed for deployment, in the form of bridge nodes. Longer term, and so far only hypothetically, a solution with an even better trade-off may be possible through the use of cryptographic accumulators. Recent work has removed several of the obstacles, such as verification speed and the ability to merge proofs, but making the mass construction of proofs tractable remains an open research topic.
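For intuition on the accumulator approach, here is a toy RSA accumulator sketch (our illustration only; the parameters are deliberately insecure, and this is not the construction from the recent work mentioned above):

```python
# Toy RSA accumulator. Deliberately insecure parameters, for illustration
# only: a real construction uses a ~2048-bit modulus of unknown
# factorization, and set elements are first hashed to primes.
N = 3233   # 61 * 53, a toy modulus
g = 2      # base element

def accumulate(elements):
    """Fold a set of (prime) elements into a single group element."""
    acc = g
    for x in elements:
        acc = pow(acc, x, N)
    return acc

def membership_witness(elements, x):
    """Witness for x: the accumulator of every element except x."""
    acc = g
    for e in elements:
        if e != x:
            acc = pow(acc, e, N)
    return acc

def verify_membership(acc, x, witness):
    """Raising the witness to x must reproduce the accumulator."""
    return pow(witness, x, N) == acc
```

Note that the accumulator and each witness are constant-size regardless of how many elements are committed, which is the trade-off improvement over Merkle proofs; updating all witnesses efficiently as the set changes is the open problem the review refers to.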
Paper by Eric Budish, University of Chicago, June 5, 2018.
Review by MIT DCI based on conversations at the Security of Proof-of-Work Workshop
Budish’s paper examines the potential costs and gains of double spending and rigorously analyzes the security model of proof-of-work. At a high level, Budish argues that in order to incentivize miners not to perform a 51% attack, the decentralized consensus enabled by proof-of-work must be very expensive to coin holders (in the form of inflation) and/or those transacting (in the form of fees). Bitcoin’s proof-of-work security model “requires that the flow cost of running the blockchain is large relative to the one-shot value of attacking it.” In other words, we need to continuously pay large “per block” costs in order to protect against the potential of a one-shot double spend attack.
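A stylized sketch of this flow-versus-stock comparison (our simplification, with made-up parameter names, not Budish's actual equations):

```python
def attack_is_unprofitable(block_reward: float,
                           attack_value: float,
                           attack_duration_blocks: int,
                           equipment_resale_value: float = 0.0) -> bool:
    """Stylized flow-vs-stock check: renting a hashpower majority for the
    attack window costs roughly the per-block payment to honest miners
    times the window length; security requires that this flow cost
    exceed the one-shot gain from the attack."""
    flow_cost = block_reward * attack_duration_blocks
    one_shot_gain = attack_value + equipment_resale_value
    return flow_cost > one_shot_gain
```

The sketch makes the uncomfortable implication visible: the per-block payment (inflation plus fees) must scale with the value an attacker could extract in a single double spend, not merely with the ongoing cost of honest operation.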
Although Budish’s model lays a powerful foundation, it is a simplified one and we foresee a variety of possible revisions to the model: incorporating layer 2, determining the convexities of cost in acquiring 51% hashpower to conduct an attack, and considering the game theoretic response of miners, users, and exchanges after such an attack, who might choose to respond to the attack by temporarily using strategies other than the default of following the heaviest chain. However, security models like this one are very useful and will have a large impact on understanding the nature of the security of cryptocurrency. In Budish’s words, his model “suggests that Bitcoin would be majority attacked if it became sufficiently economically important...which suggests that there are intrinsic economic limits to how economically important it can become in the first place.” We’re interested to see if this holds true!
Other Great Curation
The Stanford Journal of Blockchain Law & Policy is the “first law journal to publish on the greater blockchain technology space.”
Talks from Devcon4 are here!
Real World Crypto, Jan 9-11, San Jose
Stanford Blockchain Conference (formerly BPase), Jan 30-Feb 1, Stanford
Financial Cryptography and Data Security, Feb 18-22, St. Kitts
Cryptocurrency Research Community Notes
Join our Telegram channel here for discussion on Cryptocurrency Research!
If you’d like to submit your own review, please do so here!
If you’d like to see or contribute to Cryptocurrency Research Review’s mission and values, please do so here!
If you have any feedback, please just respond to this email directly. We’d love to hear from you.
Disclaimer: All content is for informational purposes only and not intended as investment advice.