# MIT DCI's Cryptocurrency Research Review #2

### RSA accumulators, Bitcoin modeling, and more economics of proof-of-work

Rhys Lindmark | Feb 1, 2019

Hello!

Welcome to Cryptocurrency Research Review, Issue No. 2! This monthly publication (curated by the MIT Media Lab’s Digital Currency Initiative in partnership with MIT Press) crowdsources high-quality reviews of interdisciplinary research from the fields of cryptocurrency and blockchain technology. We intend this publication to help surface impactful work in the space, provide more neutral analysis of ongoing research, and to encourage interdisciplinary discussions in the community.

This is an experimental effort to figure out how best to meet the needs of cryptocurrency and blockchain technology researchers, so please share your feedback with us. You can use the links included at the end to submit your feedback, or feel free to respond to this email directly. **We are also looking to hire a managing editor to help us develop this journal. If you’re interested in joining our team, visit the official job posting for more information!** We hope you enjoy this issue. Let’s get right to it!

# Review of Batching Techniques for Accumulators with Applications to IOPs and Stateless Blockchains

**Paper by Dan Boneh, Benedikt Bünz and Ben Fisch. Dec 11, 2018**

**Review by Georgios Konstantopoulos, Lead Researcher at Loom Network**

The Stanford Applied Cryptography Group recently released a paper called *"Batching Techniques for Accumulators with Applications to IOPs and Stateless Blockchains"*. The paper introduces cryptographic techniques for non-interactively creating constant-size membership and non-membership proofs for large numbers of elements in a dataset. These proofs can improve blockchain scalability by enabling more efficient UTXO commitments, stateless blockchains, and shorter interactive oracle proofs.

The authors first build on recent work by Wesolowski, presenting several proof-of-exponentiation schemes that allow a Verifier to efficiently check that a Prover has computed (or knows) the discrete logarithm of a number w with respect to a base u (i.e., knows x such that u^x = w). The proposed schemes are important building blocks both for the techniques that follow in the paper and for Verifiable Delay Functions, which the authors evaluated in another paper.
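The core verification step can be sketched in a few lines. The following is a toy, hypothetical illustration of a Wesolowski-style proof of exponentiation with an insecurely small modulus whose factorization is known; real instantiations use a large modulus of unknown factorization, and the parameter choices below are ours, not the paper's.

```python
# Toy sketch of a Wesolowski-style proof of exponentiation (PoE).
# WARNING: the modulus below is tiny and purely illustrative.
import random

def is_prime(n):
    # trial division is fine at this toy scale
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def random_prime_challenge(bits, rng):
    while True:
        c = rng.randrange(2 ** (bits - 1), 2 ** bits)
        if is_prime(c):
            return c

N = 1009 * 1013          # toy RSA modulus (factorization known here: insecure!)
u = 3                    # public base
x = 2 ** 30 + 7          # large public exponent
w = pow(u, x, N)         # claim to be verified: w = u^x mod N

# Verifier sends a random prime challenge ell
rng = random.Random(42)
ell = random_prime_challenge(16, rng)

# Prover: writes x = q*ell + r and sends the single group element Q = u^q
q, r = divmod(x, ell)
Q = pow(u, q, N)

# Verifier: recomputes the small remainder r = x mod ell and checks
# Q^ell * u^r == w, using only small exponents instead of the full x
assert (pow(Q, ell, N) * pow(u, r, N)) % N == w
```

The point of the protocol is that the Verifier's exponents (ell and r) are small, while the Prover bears the cost of the full exponentiation.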

A notable requirement for the security of the protocols is that the Verifier chooses as a “challenge” a large number without easy-to-guess prime factors, ensuring an attacker cannot convince the Verifier of a false proof. For the non-interactive versions of the protocols, a hash-to-prime function lets the Prover generate the challenge without first receiving it from the Verifier. The authors present a hash-to-prime generator built from a collision-resistant hash function, although it adds significant overhead to verification; this can be reduced by modifying each protocol accordingly (cf. Section 6 of the paper).
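One simple way to build a hash-to-prime function is rejection sampling: hash the input together with a counter and retry until the digest is prime. The sketch below is a generic illustration of that idea (using Miller-Rabin primality testing), not the paper's exact construction, and the 64-bit output size is our own arbitrary choice.

```python
# Hash-to-prime by rejection sampling: illustrative sketch only.
import hashlib

def miller_rabin(n, bases=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    # deterministic for n < 3.3 * 10^24 with these witness bases
    if n < 2:
        return False
    for p in bases:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def hash_to_prime(data: bytes, bits: int = 64) -> int:
    # hash (data || counter), incrementing the counter until a prime appears
    counter = 0
    while True:
        digest = hashlib.sha256(data + counter.to_bytes(8, "big")).digest()
        candidate = int.from_bytes(digest, "big") >> (256 - bits)
        candidate |= 1  # force odd
        if miller_rabin(candidate):
            return candidate
        counter += 1
```

Because the mapping is deterministic, Prover and Verifier derive the same challenge from the same transcript, which is what makes the protocols non-interactive.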

They use the aforementioned schemes to create a universal RSA Accumulator. Similar to a Merkle Tree, an accumulator can be used to prove membership of an element in a list. Their accumulator supports aggregating membership proofs into constant-size proofs and verifying multiple proofs in batches. It also supports non-membership proofs that can be batch-verified but cannot be efficiently aggregated: proving exclusion of multiple elements from a dataset cannot be done with a succinct proof, as the aggregated proof is approximately as large as all the individual proofs together.
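At its simplest, an RSA accumulator is a single group element g raised to the product of all (prime-represented) elements; a membership witness for x is the same exponentiation with x left out. A toy sketch, again with an insecurely small modulus of known factorization chosen purely for illustration:

```python
# Toy RSA accumulator: membership witness and verification.
# Elements must be primes (in practice mapped via hash-to-prime);
# real setups require a modulus of unknown factorization.
N = 1009 * 1013   # toy modulus: insecure, for illustration only
g = 3             # generator

def accumulate(elements):
    acc = g
    for p in elements:
        acc = pow(acc, p, N)
    return acc

def membership_witness(elements, x):
    # exponentiate by every accumulated element except x
    wit = g
    for p in elements:
        if p != x:
            wit = pow(wit, p, N)
    return wit

def verify_membership(acc, wit, x):
    return pow(wit, x, N) == acc

elements = [5, 7, 11, 13]        # already prime-represented
acc = accumulate(elements)
wit = membership_witness(elements, 7)
assert verify_membership(acc, wit, 7)
assert not verify_membership(acc, wit, 17)
```

The paper's contributions sit on top of this basic structure: proof-of-exponentiation techniques let many such witnesses be aggregated into one constant-size proof and verified in a single batch.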

RSA Accumulators can be used to create more efficient UTXO commitments, as covered in the previous issue. The accumulator can be updated efficiently for multiple elements, and since inclusion proofs can be aggregated and batched, the communication and verification overhead is much smaller. Creating a stateless blockchain is a compelling use case of RSA Accumulators, as clients can fully validate the blockchain with much less data, simply by validating the state of the accumulator. However, a valid argument against stateless blockchains, as criticized before, is that a “witness-storage” market will emerge, where users delegate storage of their proofs to third parties. Finally, accumulators are being investigated as a potential solution for UTXO-history compaction in Ethereum Plasma constructions, where users are already responsible for managing their witnesses (currently Merkle proofs). Plasma operates under a threat model in which data availability cannot be guaranteed: if users lose their witnesses while blocks are withheld, they cannot regenerate them, since the necessary blocks cannot be downloaded. This puts additional responsibility on users to properly store their witness proofs.

A notable weakness of the RSA Accumulator is its reliance on a trusted setup during initialization, because it requires a large number with unknown factorization. The idea is to take two large random primes, multiply them, and publish the product; everyone must then trust that the factors were discarded. One solution is to use the RSA-2048 number (which has a $200,000 bounty for its factorization); another is to perform a secure multiparty computation. In the first case you must trust that RSA Laboratories does not know the factorization of RSA-2048, and in the second, that the participants in the multiparty computation did not collude to recover the factorization of the generated number. A very promising route to trustlessly generating such a number of unknown factorization is a mathematical primitive called class groups (not further investigated in the paper). Another weakness of the technique is that all accumulated elements must be prime numbers, otherwise proofs can be forged; this is handled by mapping elements to primes with the hash-to-prime function used in the proof-of-exponentiation protocols.

Furthermore, the authors propose a state-of-the-art Vector Commitment (VC) scheme, which has constant-size “openings” (proofs that the *i*th committed value is the *i*th element in the dataset) and is the first of its kind to also have constant-size public parameters. A Vector Commitment is position binding, meaning an attacker cannot “open” a commitment to two different values at the same position. Contrary to Merkle Trees, VC openings can be batched into short proofs for multiple openings, greatly reducing the communication overhead between Prover and Verifier.

The primary use case for VCs is in Probabilistically Checkable Proofs (PCPs), where the Verifier repeatedly queries the Prover about subsets of a long proof and grows more convinced as more queries return valid answers. With a VC, the Prover can send a batched opening proof for multiple elements, resulting in a strictly more efficient protocol. The main downsides of this technique are the witness-generation overhead on the Prover’s side, since group-element operations are much more expensive than hashing, and the lack of quantum security.

In conclusion, the authors developed novel Vector Commitment and RSA Accumulator techniques with significant efficiency gains over previous protocols in communication overhead, setup, and Verifier complexity. Future research remains in class groups, to remove the required trusted setup, and in the aggregation of accumulator non-membership proofs. Unfortunately, due to the strong RSA assumption, the proposed accumulators are not quantum-secure. The paper also offers no direct comparison to other techniques such as hash-based accumulators (e.g., Utreexo) or pairing-based accumulators. To sum up, the paper is well written, with intuitive examples and proofs, as well as applied use cases for all the described primitives, which we should expect to see more of in the future.

[For a more in-depth technical explanation, see “A Deep Dive on RSA Accumulators”]

# Review of The Bitcoin Backbone Protocol: Analysis and Applications

**Paper by Juan A. Garay, Aggelos Kiayias and Nikos Leonardos. Dec 19, 2018**

**Review by Haseeb Qureshi, General Partner at Metastable Capital**

In the words of Jameson Lopp, *"the Bitcoin protocol does not — and arguably cannot — have a formal specification, because no one has the authority to write one."* Satoshi's original whitepaper, while elegant and succinct, only sketches a rough outline of the Bitcoin algorithm. This lack of a specification has stymied many attempts to formally analyze Bitcoin's guarantees.

A canonical starting place when analyzing a complex system like Bitcoin is to build a simplified model. One can construct an idealized environment and describe an algorithm that simulates each actor in the system. Then one can repeatedly increment a global "tick" function that advances the simulation, allowing each actor to take a step in their respective algorithms. Armed with this model, we can prove some properties of the idealized system it describes.
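As a hypothetical illustration of this style of modeling (a simplification of our own, not the Backbone paper's actual formalism), a synchronous round-based simulation might look like:

```python
# Hypothetical round-based ("tick") simulation of synchronous PoW mining.
# Illustrative only; the Backbone paper's model is far more careful.
import random

def simulate(rounds, n_miners, p_block, seed=0):
    """Each round ("tick"), every miner independently finds a PoW solution
    with probability p_block. The adopted chain grows by one block in any
    round with at least one success; rounds with multiple simultaneous
    successes produce competing blocks (forks), of which only one survives."""
    rng = random.Random(seed)
    chain_length, forked_rounds = 0, 0
    for _ in range(rounds):
        successes = sum(rng.random() < p_block for _ in range(n_miners))
        if successes >= 1:
            chain_length += 1
        if successes >= 2:
            forked_rounds += 1
    return chain_length, forked_rounds

# The expected number of PoW solutions per round (here n_miners * p_block)
# governs how often forks occur: keeping it well below 1 keeps forks rare.
length, forks = simulate(rounds=10_000, n_miners=100, p_block=0.001)
```

Even this crude sketch exposes the key parameter the Backbone analysis highlights: the rate of solutions per round of communication, which trades off throughput against fork frequency.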

The Bitcoin Backbone Protocol, first published in 2014, is the first formal model of the Bitcoin network. The paper provides a description of an algorithm which simulates miners in a synchronous network, each working to extend a proof of work (PoW)-based chain. Their model is agnostic to the specific validity rules of the blockchain and can be used to analyze any system based on Nakamoto Consensus, be it Bitcoin, or other applications such as Ethereum or Namecoin.

Using the Bitcoin Backbone Protocol as a formal model, we can pinpoint the critical parameters that contribute to Bitcoin's security. The first important parameter they point out is the proportion of attacker-controlled hashrate. The other, less obvious parameter is the expected number of PoW solutions per round of communication. Some of the authors further build on this analysis and explore the inherent tradeoffs between block time and security in the paper Speed-Security Tradeoffs in Blockchain Protocols, where they use the Bitcoin Backbone Protocol to substantiate their analysis. Pass, Seeman, and shelat build further off this work, extending the analysis of Nakamoto Consensus to a semi-synchronous network (Garay, Kiayias, and Leonardos incorporate some of their analysis into a later version of the paper).

Many properties that have long been informally understood about Nakamoto Consensus fall out of the model. For example, their model demonstrates why the block time must be large compared to the network latency. It also convincingly demonstrates how selfish mining allows an adversary to produce more blocks than their share of the hash rate. And of course, it shows how liveness breaks down as an adversary's hash rate approaches 50% of the total network power. While none of these insights are novel, the model allows us to analyze these properties and reason about how they interact with each other.

Building a model of the Bitcoin protocol is valuable not only for proving properties of the protocol, but also for enabling more accurate modeling of blockchains. That said, no model is perfect and the Bitcoin Backbone Protocol cannot capture the complete protocol. For example, network level attacks like BGP routing attacks or eclipse attacks fall outside the scope of the model. Further, incentives are ignored in this model, so it can only describe "honest" and "malicious" nodes, rather than rational ones.

The map is not the territory, and formal models always entail some simplification. Cryptocurrencies, after all, are not merely consensus algorithms—they are also economic and political systems, and thus require a multidisciplinary approach if we want to understand them. But formal models like the Bitcoin Backbone Protocol ultimately enable researchers to work within a common framework.

# Review of “The Blockchain Folk Theorem”

**Paper by Bruno Biais, Christophe Bisière, Matthieu Bouvard and Catherine Casamatta. Jan 5, 2018**

**Review by Daniel Aronoff, PhD Candidate, MIT Department of Economics**

It is an ever-present possibility that a blockchain diverges into two or more chains when blocks containing different transactions are appended to the same existing block. There can be myriad incentives for such forks to emerge in a decentralized proof-of-work cryptocurrency with a ‘longest chain’ rule (strictly speaking, the chain with the most accumulated proof-of-work), like Bitcoin or Ethereum. For example, a miner who acquires (and maintains) >50% of the hash power could build an alternative blockchain, starting from any prior block of the existing chain, that is guaranteed to eventually become the longest. If the agents who transact and invest in the cryptocurrency continue to adhere to the ‘longest chain’ rule, they will treat the alternative blockchain as the true one. This enables the attacker to extract value in several ways: by earning the mining rewards on the new blockchain, by selecting which transactions to place in the new blocks, and by carrying out a ‘double spend’ attack.

One may, however, doubt that agents will continue to adhere to the longest chain rule after they become aware that the longest chain was created by an act (a double spend) which involved stealing tokens. Acquiescence to stealing does not seem a very satisfying response to a reorg for current and potential future victims. Yet, there is no obvious mechanism to guide their response, since ‘they’ are a group of mostly strangers. It can be difficult to coordinate a collective response.

Bruno Biais and his co-authors purport to explain why it is often rational to adhere to the longest chain rule, no matter the origin of the longest blockchain. They derive conditions under which mining the longest chain is a Markov Perfect Equilibrium (MPE). The key to their model is an unproven, primitive assumption that token value is an increasing function of the mining hash power devoted to a blockchain. When this holds, the equilibrium is determined by the balance of two forces incentivizing miners, who receive token rewards for solving block puzzles: (i) participating in the chain with the greatest hash power ensures that the token reward for mining is maximized, and (ii) participating in a chain with less hash power increases the probability that a miner will solve a block puzzle (and earn a token reward). Adherence to the longest chain rule will always be an MPE when hash power is widely dispersed amongst miners, since the token reward for solving a mining puzzle on an alternative blockchain mined by a single miner with small hash power will have little value.
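The tension between forces (i) and (ii) can be made concrete with a toy calculation, under the paper's (contested) assumption that token value increases in chain hash power. The value function and all numbers below are hypothetical choices of ours, used only to show how the shape of the value function drives the equilibrium.

```python
# Toy comparison of the two forces in Biais et al.'s model.
# value_of is a hypothetical increasing value function; the paper assumes
# only that token value increases in total chain hash power.

def expected_reward(own_hash, chain_hash, value_of):
    # P(win the next block) * token value of the reward on that chain
    return (own_hash / chain_hash) * value_of(chain_hash)

value_of = lambda H: H ** 0.5   # hypothetical concave value function

# A miner with 10 units of hash power chooses between the main chain
# (100 units total) and a small fork (20 units total, itself included).
main = expected_reward(10, 100, value_of)  # low win prob., high token value
fork = expected_reward(10, 20, value_of)   # high win prob., low token value

# With this concave value function, force (ii) dominates and the fork pays
# better; with a sufficiently convex value function, force (i) dominates
# and the longest chain wins. The shape of the value function is decisive.
```

This is exactly why the primitive assumption matters so much: the equilibrium set hinges on how steeply value responds to hash power, which the model takes as given rather than derives.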

However, an MPE in which both the original chain and a fork are mined can emerge when there is a ‘sunspot’ on which miners can coordinate. Intuitively, force (ii) may induce a group of miners to work on an alternative blockchain, where they have a higher probability of earning (a lower) reward.

Perhaps the most striking result of the paper is a demonstration that when a miner accumulates >50% of the hash power and launches a double spend attack, there exists an MPE such that on the equilibrium path miners always mine the longest chain (except the miner who finds it profitable to make the attack, and creates the new blockchain). The implication is that, even if everyone knows a double spend attack has occurred, and tokens have been stolen, it is rational for everyone - including the victims of the attack - to recognize the alternative blockchain, thus legitimizing the theft.

The game theory employed in this paper is elegant and rigorous; but is it convincing? I do not think so.

Let’s go back to the key primitive assumption: that token value is a function of hash power. Why should this be? The authors support their claim by pointing to the correlation between Bitcoin hash power and token value over time. I would submit that the correlation holds for every proof-of-work cryptocurrency, but what of it?

A foundational assumption of classical economics was the labor theory of value: the idea that the value of a commodity is determined by the labor time invested in creating it. Economists pointed to evidence of a correlation between labor and value to support the claim. The labor theory of value was rejected by the end of the 19th century, after the ‘marginalist revolution’ in economic theory offered a more satisfactory account of value as deriving from consumer preferences. The now ubiquitous supply and demand curve taught in Econ 101 is the result of that paradigm shift. The demand curve reflects consumer preference. There is a correlation between price on the one hand and cost and supply (or, in the language of classical economics, labor) on the other, arising from the interaction of the demand curve and the supply curve. However, value, it is now understood, is not created by cost or supply alone. To determine the price and quantity of a commodity, we must know the demand function; we cannot pin down the equilibrium from the supply function alone.

What is crucially missing from the Biais et al paper is the demand function. How can we think about the value of Bitcoin, for example, without knowing anything at all about the people who trade and invest in it? Hash power is not informative; it will increase when token value rises, because the increased reward for mining will attract new miners. That does not explain the reason for the increase in token value. Moreover, in the case of Bitcoin, nobody transacting or investing in tokens will even notice a change in hash power: the difficulty of the mining puzzle is recalibrated to keep the block production rate constant at any volume of hash power. Therefore, we must conclude that the assumption that token value is determined by hash power, which is crucial for the model constructed by Biais et al, is unsupportable.

This brings us back to the question raised at the beginning of this review. If there is a reorg that is commonly understood to have originated in a theft of tokens, surely agents will react in some way, and their collective reaction will determine which blockchains are recognized as legitimate and how much value will be attributed to each. This suggests that researchers should be looking at the incentives and market structure of all the players in the cryptocurrency market to determine the equilibrium of value and recognized blockchains. Biais et al have created a rigorous foundation upon which to think about this question, but much more work needs to be done.

# Previous Issues

# Other Great Curation

Lots of discussion around last issue’s Casper CBC review (1, 2)

**Reviews from the Ecosystem**

# Upcoming Conferences

Financial Cryptography and Data Security, Feb 18-22, St. Kitts (The DCI will be there. Please find us to chat about this journal!)

IEEE Security & Privacy on the Blockchain, June 17-19, Stockholm, Sweden (Submission deadline Feb 18, 2019)

Great list of academic cryptocurrency conferences from BSafe.network

# Community Notes

Join our Telegram channel here for discussion on Cryptocurrency Research!

If you’d like to submit your own review, please do so here!

If you have any feedback, please just respond to this email directly. We’d love to hear from you.

If you would like to join our team to work on the Cryptocurrency Research Review, apply to be the managing editor! Visit the official job posting for more information.

Thanks!

*Disclaimer: All content is for informational purposes only and not intended as investment advice.*

