Abstract: Multisignatures enable a group of signers to produce a single signature on a given message. Recently, Drijvers et al. (S&P’19) showed that all thus far proposed two-round multisignature schemes in the DL setting (without pairings) are insecure under concurrent sessions, i.e., if a single signer participates in multiple signing sessions concurrently. While Drijvers et al. improve the situation by constructing a secure two-round scheme, saving a round comes with the price of having less compact signatures. In particular, the signatures produced by their scheme are more than twice as large as Schnorr signatures, which arguably are the most natural and compact among all practical DL signatures and are therefore becoming popular in cryptographic applications (e.g., support for Schnorr signature verification has been proposed to be included in Bitcoin). If one needs a multisignature scheme that can be used as a drop-in replacement for Schnorr signatures, then one is either forced to resort to a three-round scheme such as MuSig (Maxwell et al., DCC 2019) or MSDL-pop (Boneh, Drijvers, and Neven, ASIACRYPT 2018), or to accept that signing sessions are only secure when run sequentially, which may be hard to enforce in practice, e.g., when the same signing key is used by multiple devices.
In this work, we propose MuSig2, a novel and simple two-round multisignature scheme variant of the MuSig scheme. Our scheme is the first multisignature scheme that simultaneously i) is secure under concurrent signing sessions, ii) supports key aggregation, iii) outputs ordinary Schnorr signatures, iv) needs only two communication rounds, and v) has similar signer complexity as regular Schnorr signatures. Furthermore, our scheme is the first multisignature scheme in the DL setting that supports preprocessing of all but one rounds, effectively enabling a non-interactive signing process, without forgoing security under concurrent sessions. The combination of all these features makes MuSig2 highly practical. We prove the security of MuSig2 under the one-more discrete logarithm (OMDL) assumption in the random oracle model, and the security of a more efficient variant in the combination of the random oracle and algebraic group models.
Read more on the Blockstream Blog.
BIP-Schnorr Abstract: This document proposes a standard for 64-byte Schnorr signatures over the elliptic curve secp256k1.
BIP-Taproot Abstract: This document proposes a new SegWit version 1 output type, with spending rules based on Taproot, Schnorr signatures, and Merkle branches.
BIP-Tapscript Abstract: This document specifies the semantics of the initial scripting system under BIP341 [BIP-Taproot].
Pro tip: If you have trouble memorizing BIP numbers (like me), achow101 observed that BIP-Taproot’s number, 341, is BIP143 with its digits reversed. The SegWit version 0 transaction digest is defined in BIP143 and the version 1 digest is defined in BIP341.
Abstract: MuSig is a multisignature scheme for Schnorr signatures, which supports key aggregation and is secure in the plain public key model. Standard derandomization techniques for discrete logarithm-based signatures such as RFC 6979, which make the signing procedure immune to catastrophic failures in the randomness generation, are not applicable to multisignatures as an attacker could trick an honest user into producing two different partial signatures with the same randomness, which would reveal the user’s secret key.
In this paper, we propose a variant of MuSig in which signers generate their nonce deterministically as a pseudorandom function of the message and all signers’ public keys and prove that they did so by providing a non-interactive zero-knowledge proof to their cosigners. The resulting scheme, which we call MuSig-DN, is the first Schnorr multisignature scheme with deterministic signing. Therefore its signing protocol is robust against failures in the randomness generation as well as attacks trying to exploit the statefulness of the signing procedure, e.g., virtual machine rewinding attacks. As an additional benefit, a signing session in MuSig-DN requires only two rounds instead of three as required by all previous Schnorr multisignatures including MuSig. To instantiate our construction, we identify a suitable algebraic pseudorandom function and provide an efficient implementation of this function as an arithmetic circuit. This makes it possible to realize MuSig-DN efficiently using zero-knowledge proof frameworks for arithmetic circuits which support inputs given in Pedersen commitments, e.g., Bulletproofs. We demonstrate the practicality of our technique by implementing it for the secp256k1 elliptic curve used in Bitcoin.
Read more on the Blockstream Blog or watch the pre-recorded talk we presented at the CCS 2020 conference.
Reducing Bitcoin Transaction Sizes with x-only Pubkeys
This article is about the recent introduction of so-called x-only pubkeys to the Bitcoin Improvement Proposal BIP-Schnorr […] significantly reducing the weight of every transaction output without any loss in security. By removing the Y-coordinate byte from compressed public keys currently used in Bitcoin, public keys end up with a 32-byte representation. We are going to look at how it works, why that’s useful, and sketch a security proof.
Using BIP-Schnorr-based multisignatures, no matter how many signers are involved, the result is a single public key and a single signature indistinguishable from a regular, single-signer BIP-Schnorr signature. This article is about optimizing implementations of multisignature protocols and why seemingly harmless changes can totally break the security.
At Breaking Bitcoin 2019 in Amsterdam I gave a talk about how to build secure protocols on BIP-Taproot, or more specifically how to avoid the dangers we have learned about so far. There was not enough time to cover everything. The talk also gives an introduction to how to use our MuSig implementation in libsecp256k1-zkp. The video recording is on YouTube (slides). Thanks to kanzure there’s also a transcript of the talk.
Erratum: MuSig nonces can not be pre-shared, only nonce commitments. See https://github.com/ElementsProject/secp256k1-zkp/pull/73 for details.
nix-bitcoin (github.com/fort-nix/nix-bitcoin) is a project I contribute to in my spare time that provides Nix packages and NixOS modules for easily installing Bitcoin nodes and higher layer protocols. The initial idea was to build myself a Lightning node in a reproducible way. I talked more about the motivation and how to use it at the Lightning Hackday MUC (video, slides).
Last weekend a bunch of hackers assembled for the 3rd Lightning Network Hackday in Berlin. The event was packed with interesting sessions, neat hacks and exciting discussions, which were concluded with the traditional dinner & drinks at ROOM77. I gave a talk about “Schnorr and Taproot in Lightning” (slides, video) focusing on privacy and security implications.
At the recent Building on Bitcoin conference in Lisbon I gave a talk about a few new ideas in the scriptless scripts framework. The first part was mainly about blind coin swaps, which are a way to swap bitcoins with a tumbler without revealing which coins are swapped. The second part was about how to exchange ecash tokens peer-to-peer using scriptless scripts and Brands credentials. You can find the talk on YouTube and the slides here. Thanks to kanzure there’s also a transcript of the talk.
EDIT: I’ve added a note about the security of Blind Schnorr signatures against forgery to the slides.
In short, a naive implementation of the scheme is vulnerable to Wagner’s attack: an attacker can forge a signature using 65536 parallel signing sessions and O(2^32) work.
There are already a few good explanations of the bug, for example at the Monero StackExchange and the modern crypto mailing list. This article gives additional background about the signature scheme and the properties of the curve that allowed this bug to slip in.
Apart from the value at stake, this bug is interesting because it shows the risks of taking a specialized cryptosystem such as Ed25519 apart and applying the parts in other contexts. Ed25519 is designed for plain cryptographic signatures, and the curve it is based on is used in CryptoNote to implement one-time ring signatures. In contrast to a regular signature scheme, one-time ring signatures using the curve require that part of the signature does not generate a small subgroup. Ensuring this is necessary when using curves with a cofactor. CryptoNote did not do that.
A ring signature proves that the signer is among a set of public keys (aka “the ring”), without revealing which public key belongs to the signer. The construction used in CryptoNote/Monero and also for example in Confidential Transactions is based on hash rings.
For simplicity assume for now that the ring consists of only one key, which essentially reduces the scheme to a Schnorr signature. As usual, G is a generator of a cyclic group in which the discrete logarithm problem is hard, and we’re using additive notation for group operations. Then the signature scheme consists of the three algorithms keygen, sign and verify.
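The three algorithms can be sketched in Python over a toy group (a hedged illustration: the parameters, the `pmul`/`padd` helpers and the hash-to-challenge function are stand-ins of my own choosing, picked for readability rather than security):

```python
import hashlib
import secrets

# Toy group of prime order q inside Z_p^*: "scalar * point" is modular
# exponentiation and "point + point" is modular multiplication, matching
# the post's additive notation. Parameters are illustrative, NOT secure.
p = 2039           # safe prime, p = 2*q + 1
q = 1019           # order of the subgroup generated by G
G = 4              # generator of the order-q subgroup

def pmul(k, P):    # k*P in additive notation
    return pow(P, k, p)

def padd(A, B):    # A + B in additive notation
    return (A * B) % p

def H(*args):      # hash to an integer challenge
    data = b"|".join(str(a).encode() for a in args)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def keygen():
    x = secrets.randbelow(q - 1) + 1      # secret key
    return x, pmul(x, G)                  # public key P = x*G

def sign(x, m):
    k = secrets.randbelow(q - 1) + 1      # fresh random nonce
    e = H(m, pmul(k, G))                  # e = hash(m, k*G)
    s = (k - e * x) % q                   # s = k - e*x
    return e, s

def verify(P, m, sig):
    e, s = sig
    # for an honest signer s*G + e*P equals k*G, so the hash must match
    return e == H(m, padd(pmul(s, G), pmul(e, P)))
```

In this toy group the additive notation of the post maps onto ordinary modular arithmetic, which keeps the equations visible without an elliptic curve library.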
Let’s get a basic informal understanding of why such Schnorr-based schemes work by taking the perspective of Eve, who does not know the discrete logarithm x of P. Obviously, Eve would invalidate the signature when attempting to just change the message m. Further, when trying to fake a signature without knowing x, Eve can not just set s as in the regular signing algorithm. But to make a signature that passes verification for some public key P, Eve must find s s.t. k*G = s*G + e*P. We can rearrange that in the following way:
k*G = s*G + e*P
(k - s)*G = e*P
x = (k - s)/e
That means that if she would find such an s, she could compute the discrete logarithm of P. This is a contradiction. Note that during verification the output of the hash function, e, is also part of the input to the hash function. How about during signing Eve chooses s at random and then simply hashes s*G + e*P? The problem is that the properties of a cryptographic hash function prevent Eve from knowing e before evaluating the hash function. So e can not be fed into the hash function, and as a consequence s must be chosen to account for e only after hashing.
Rings of size one naturally don’t make a lot of sense but are sufficient for this post’s purpose. The curious can for example have a look at the explanation in the Borromean signature paper (section 2.2).
One-time ring signatures are used in CryptoNote to combine the privacy properties of ring signatures with a mechanism to detect double spending. This is done by introducing the concept of a “key image”. The key image is a group element that is deterministically derived from a key but in itself doesn’t reveal anything about the key. Define hashp to be a hash function that hashes to an element of the group. Then the key image I for the key pair (x, P = x*G) is I = x*hashp(P), so P and I have the same discrete logarithm (with respect to the generators G and hashp(P), respectively). A one-time ring signature includes the key image belonging to the signer. The CryptoNote protocol allows using ring signatures when spending coins by enforcing that each key image can occur only once in the blockchain.
Let’s for example assume there are two unspent coins – in our case just represented by public keys P1 and P2. Alice knows the private key to P1, so she can spend the coin by providing a one-time ring signature with P1, P2 and the key image I corresponding to P1. An observer can not tell whether P1 or P2 was spent. But if Alice would attempt to spend P1 again (even with a different ring), she would require the same key image, which is rejected by the network. However, P2 can still be spent because its signature uses a different key image.
Now to the concrete one-time ring signature scheme – again shown only for rings of size 1.
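A hedged Python sketch over a toy group (parameters illustrative, not secure; here `hashp` just exponentiates G, which keeps the equations visible even though a real hashp must not have a known discrete logarithm, and Ed25519’s hashp additionally multiplies by the cofactor 8):

```python
import hashlib
import secrets

# Toy group of prime order q inside Z_p^* (illustrative only, not secure):
# "scalar * point" is modular exponentiation, "point + point" is modular
# multiplication, matching the post's additive notation.
p, q, G = 2039, 1019, 4

def pmul(k, P):    # k*P
    return pow(P, k, p)

def padd(A, B):    # A + B
    return (A * B) % p

def H(*args):      # hash to an integer challenge
    data = b"|".join(str(a).encode() for a in args)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def hashp(P):
    # hash to a group element; in this toy group we simply exponentiate G
    return pmul(H("hashp", P), G)

def keygen():
    x = secrets.randbelow(q - 1) + 1
    P = pmul(x, G)
    I = pmul(x, hashp(P))                 # key image I = x*hashp(P)
    return x, P, I

def sign(x, P, m):
    k = secrets.randbelow(q - 1) + 1      # fresh random nonce
    e = H(m, pmul(k, G), pmul(k, hashp(P)))
    s = (k - e * x) % q
    return e, s

def verify(P, I, m, sig):
    e, s = sig
    # honest signer: s*G + e*P = k*G  and  s*hashp(P) + e*I = k*hashp(P)
    return e == H(m,
                  padd(pmul(s, G), pmul(e, P)),
                  padd(pmul(s, hashp(P)), pmul(e, I)))
```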
Intuitively, it’s very hard to create a signature where I != x*hashp(P), because the same s that is used to prove knowledge of the discrete logarithm x of P is also used for I.
The one-time ring signature scheme for rings larger than 1 is described in the CryptoNote whitepaper (section 4.4), although in a less space efficient way. The construction is also related to the proof of discrete log equality used in perfectly binding Confidential Transactions. Note that by now a more general scheme based on one-time ring signatures, called Ring Confidential Transactions, has replaced regular one-time ring signatures in Monero.
As mentioned in the beginning, CryptoNote uses Ed25519’s curve (also referred to simply as “Ed25519”) to represent its group elements.
One of Ed25519’s properties is that the number of points on the curve (the curve order) is larger than the number of points in the group generated by the base point G (the group order). The group order is the prime l = 2^252 + 27742317777372353535851937790883648493 and the curve order is 8*l. The ratio of the curve order to the group order is known as the cofactor, which is 8 in the case of Ed25519.
This is, for example, different from the curve secp256k1 used in Bitcoin, which has cofactor 1.
The cofactor indicates that there are groups of low order on the curve.
For example, let P be the point represented by 26e8958fc2b227b045c3f489f2ef98f0d5dfac05d3c63339b13802886d53fc05; then P generates the (unique) subgroup of order 8, which implies 8*P = 0.
There are only a few points on the curve with low order. One way to find them is to generate a random point on the curve with order a*l and then multiply it by l to get a point of order a, where 1 <= a <= 8.
On the other hand, hashp ensures that the resulting point is in the prime order group by multiplying it by 8 before outputting it.
In the case of the regular Ed25519 signature algorithm it doesn’t matter if a public key is of low order.
But Ed25519 implementations make sure that a private key is a multiple of 8 (“clamping”), so when multiplying it with a low order point then the result is always 0 (instead of leaking bits from the private key).
Therefore, after running the Diffie-Hellman protocol on such a curve, the key shared between the two parties can be 0 if one party doesn’t behave “contributory”.
Assume the attacker owns a coin and therefore can create a regular one-time ring signature to spend it with key image I = x*hashp(P). The attacker can spend the coin again with key image I' = I + L, where L is a low order point with order o. Remember that the verification equation includes s*hashp(P) + e*I. If o divides e (e = e'*o), then e*I' = e*I + e'*o*L = e*I, so a valid signature with I' can be created in the same way as with I (except that the message m now has to commit to I'). Since o is at most 8, it is easy to retry signing until there is a suitable hash e.
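The arithmetic of the attack can be checked in a toy additive group of order 8*q, mimicking Ed25519’s cofactor (the concrete numbers below are made up for illustration):

```python
# Toy additive group Z_N with N = 8*q: cofactor 8, like Ed25519's curve.
q = 1019                 # plays the role of the prime subgroup order l
N = 8 * q                # full group order
I = 8 * 5                # a key image in the prime order subgroup
L = q                    # a point of low order o = 8: 8*L = 0 (mod N)
assert (8 * L) % N == 0

I_prime = (I + L) % N    # malicious second key image I' = I + L
for e in range(200):
    if e % 8 == 0:       # whenever o divides e ...
        assert (e * I_prime) % N == (e * I) % N   # ... e*I' = e*I
    else:
        assert (e * I_prime) % N != (e * I) % N
```

Since a random hash e is divisible by 8 roughly one time in eight, retrying the signing a handful of times suffices, as described above.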
Interestingly, in ByteCoin this was exploited in a much less effective way. The attacker used a low order public key P, requiring a low order key image I. Because there are only 8 low order points on the curve (multiple representations of the same point are disallowed in CryptoNote), this attack can only be performed 8 times in one blockchain.
The fix implemented in Monero is to verify that each key image I generates a group of the prime order l by checking that l*I = 0. If I actually had order l' != l, then for l*I = 0 to hold, l would have to be divisible by l', which is a contradiction because l is prime.
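In a toy additive group with cofactor 8 (made-up numbers, q standing in for the prime l), the l*I = 0 check indeed separates honest from malicious key images:

```python
q = 1019               # plays the role of the prime l
N = 8 * q              # toy group with cofactor 8
I = 40                 # honest key image: lies in the prime order subgroup
I_prime = (I + q) % N  # malicious key image: honest I plus a point of order 8

assert (q * I) % N == 0        # honest key image passes the l*I = 0 check
assert (q * I_prime) % N != 0  # malicious key image is rejected
```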
So the additional cost introduced by the fix is one scalar multiplication per transaction input.
This article gave yet another example of how cryptographic components can not easily be repurposed. In particular, when implementing more complex protocols based on curves with a cofactor (like Ed25519 or Curve25519), the group order of user supplied generator points should always be verified. Deciding case-by-case whether that’s necessary is quite dangerous in practice. There is, however, potential to get rid of this bug class for some cryptosystems by eliminating cofactors through point compression (see Decaf). Alternatively, when designing a new cryptosystem it should be considered to use a prime order curve such as secp256k1.
I am a complete outsider to Monero and especially the Monero development community, but having reviewed the CT design and implementation (in libsecp256k1) extensively during my day job, I was very interested in the design decisions underlying RingCT. Very quickly I found a red flag in the ring signature scheme called ASNL used in the range proofs. This scheme is a new contribution by the paper and indeed turned out to be exploitable, such that an attacker would be able to create coins from nothing. You can find the exploit code on GitHub and a detailed explanation in this post.
While writing the exploit code and preparing this blog post I learned that an anonymous person called RandomRun reported a flaw in the security proof of ASNL, which convinced the Monero devs to publish a bugfix release that switches to Borromean signatures (good call!). As a result the upcoming hard fork will not be vulnerable to this exploit. Interestingly, the error in the security proof is exactly the flipside of the vulnerability discussed in this post.
EDIT: The Monero community reacted to this article (see reddit) but they didn’t like its style. Also, they got the timeline of the discovery of the bug wrong.
I have the highest respect for RandomRun and parts of the Monero community. It takes an incredibly strong character to drop a 0-day worth tens of millions USD. However, the fact that the original hard fork schedule of RingCT remains unchanged despite a complete break of the system raises more than a few questions. Even more so when the author of RingCT called for more review by the end of October.
Confidential transactions include a range proof to prevent negative amounts.
These range proofs use a generalization of ring signatures in which the conjunction of multiple rings is proven, for example that the prover knows the discrete logarithm of (Pk1 OR Pk2) AND (Pk1 OR Pk3) AND ...
The original CT scheme introduced Borromean signatures for that purpose which are based on rings of hashes and provide space savings when public keys appear more than once.
Instead, the RingCT paper proposes a new scheme called Aggregate Schnorr Nonlinkable Ring Signature because it has “perhaps simpler security proofs” (RingCT paper).
An ASNL signature consists of tuples (P1_j, P2_j, L1_j, s2_j) for j = 1, ..., n and a scalar s, and is supposed to prove that the signer knows the DL of (P1_1 OR P2_1) AND ... AND (P1_n OR P2_n).
Let’s consider the n = 1 case (no conjunction) informally. The verifier checks that

L1 = s*G + H(s2*G + H(L1)*P2)*P1

where H is a hash function.
So the signer either knows the discrete logarithm x of P1 and, for random k and s2, sets L1 = k*G and s = k - H(s2*G + H(L1)*P2)*x, or knows the discrete logarithm x of P2 and, for random k and s, sets L1 = s*G + H(k*G)*P1 and s2 = k - H(L1)*x.
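Both branches can be exercised over a toy group (a hedged reconstruction of the n = 1 case from the verification equation; the group parameters and helper names are my own stand-ins, not the paper’s):

```python
import hashlib
import secrets

# Toy group of prime order q inside Z_p^* (illustrative only, not secure):
# "scalar * point" is modular exponentiation, "point + point" is modular
# multiplication, matching the post's additive notation.
p, q, G = 2039, 1019, 4
pmul = lambda k, P: pow(P, k, p)           # k*P
padd = lambda A, B: (A * B) % p            # A + B

def H(*args):                              # hash to an integer challenge
    data = b"|".join(str(a).encode() for a in args)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def verify(P1, P2, L1, s2, s):
    # check L1 == s*G + H(s2*G + H(L1)*P2)*P1
    e1 = H(padd(pmul(s2, G), pmul(H(L1), P2)))
    return L1 == padd(pmul(s, G), pmul(e1, P1))

def sign_knowing_x1(x1, P2):
    k = secrets.randbelow(q - 1) + 1
    s2 = secrets.randbelow(q)
    L1 = pmul(k, G)
    e1 = H(padd(pmul(s2, G), pmul(H(L1), P2)))
    s = (k - e1 * x1) % q
    return L1, s2, s

def sign_knowing_x2(x2, P1):
    k = secrets.randbelow(q - 1) + 1
    s = secrets.randbelow(q)
    e1 = H(pmul(k, G))                     # e1 = H(L2) with L2 = k*G
    L1 = padd(pmul(s, G), pmul(e1, P1))
    s2 = (k - H(L1) * x2) % q
    return L1, s2, s
```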
In the case of multiple conjunctions (n > 1), the verifier computes LHS ← L1_1 + ... + L1_n and RHS ← s*G + H(s2_1*G + H(L1_1)P2_1)P1_1 + ... + H(s2_n*G + H(L1_n)P2_n)P1_n and checks that LHS = RHS.
In short, this is vulnerable because you can just choose some L1_j such that it cancels out the summand on the right hand side where both DLs of P1_j and P2_j are unknown. In contrast, the “proof” of security of ASNL assumes that the adversary knows an a_j s.t. a_j*G = L1_j for all j.
Abstract
We analyse the performance of several clustering algorithms in the digital peer-to-peer currency Bitcoin. Clustering in Bitcoin refers to the task of finding addresses that belong to the same wallet as a given address. In order to assess the effectiveness of clustering strategies we exploit a vulnerability in the implementation of Connection Bloom Filtering to capture ground truth data about 37,585 Bitcoin wallets and the addresses they own. In addition to well-known clustering techniques, we introduce two new strategies, apply them on addresses of the collected wallets and evaluate precision and recall using the ground truth. Due to the nature of the Connection Bloom Filtering vulnerability the data we collect is not without errors. We present a method to correct the performance metrics in the presence of such inaccuracies. Our results demonstrate that even modern wallet software can not protect its users properly. Even with the most basic clustering technique, known as the multi-input heuristic, an adversary can guess on average 68.59% of the addresses of a victim. We show that this metric can be further improved by combining several more sophisticated heuristics.
As we’ve seen over the last two days, scalability is a multidimensional problem. One of the main topics of research is increasing the blocksize to increase transaction throughput. The assumption is that as technological progress continues and transaction throughput is increased accordingly, the cost for running a fully validating node stays constant.
However, blocksize proposals usually only directly change one aspect of full node costs – the blocksize. The actual costs for a node are composed of multiple factors, such as the resources required to validate a block or to store the utxos. And these factors are not necessarily in a linear relationship with each other. This has been discussed in more detail in Mark’s talk at the previous Scaling Bitcoin conference.
The most prominent example for showing nonlinear relationships consists of putting as many OP_CHECKSIG operations into a single transaction as possible. For each checksig operation, the whole transaction is hashed and a signature is verified. Assuming 1MB blocks, it is possible to create a block that takes more than 10 minutes to validate on my 2014 laptop. It is clear that each proposal that increases blocksize also needs a strategy to deal with these nonlinearities.
One of those strategies is to put a hard limit on the number of signature verifications and the number of bytes that are hashed for a block to be valid. We see some problems with this approach: First, as it stands there is no intuitive way to choose these limits, nor how they should grow with the blocksize. Second, there are other factors that influence validation cost, which might not be relevant now, but could get significant with bigger blocks if not properly limited. For example, it is possible to create a 1MB block that takes 5 seconds to validate on my laptop, which just consists of as many HASH opcodes as possible. And third, placing hard limits on certain factors completely ignores the relationship between those factors.
These relationships exist because those factors influence validation cost in some way. This brings us to the concept of cost metrics.
The goal of the cost metric approach is to tie consensus rules to actual resource requirements. The idea is that cost of a block is a function of certain block properties. As an example, the block cost could be represented by a weighted sum of block size, validation cost and utxo growth.
When we have agreed on such a cost metric, we can get rid of the hard limits and instead introduce a new consensus rule that blocks need to cost less than a threshold to be valid.
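As a toy illustration, such a rule could look like the following (all weights and the threshold are made-up values; the talk does not fix concrete numbers):

```python
# Hypothetical cost metric: a weighted sum of block properties.
# All weights and the threshold below are made-up illustration values.
W_SIZE = 1.0          # cost per byte of block size
W_VALIDATION = 50.0   # cost per millisecond of validation time
W_UTXO = 10.0         # cost per byte of utxo set growth
COST_THRESHOLD = 4_000_000

def block_cost(size_bytes, validation_ms, utxo_growth_bytes):
    return (W_SIZE * size_bytes
            + W_VALIDATION * validation_ms
            + W_UTXO * utxo_growth_bytes)

def block_valid(size_bytes, validation_ms, utxo_growth_bytes):
    # consensus rule: a block is only valid if its cost is below the threshold
    return block_cost(size_bytes, validation_ms, utxo_growth_bytes) <= COST_THRESHOLD
```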
One aspect of a full cost function is validation cost. We can view validation cost as the time it takes to validate a block on a reference machine. Then we can introduce a threshold saying that a block is not allowed to exceed 30 seconds of validation time on the reference machine. In other words, we want to find a function from block features, like the number of bytes that are hashed for signature validation, to validation time on the reference machine. To do that, we assume a simple model function stating that the validation duration is a linear combination of block features, collect data about the actual validation duration on that machine, and then fit the model to the data.
The one-dimensional situation is depicted on the right: there is one data point for each block, consisting of the number of bytes that were hashed and the time it took to validate. With this data it is possible to determine the effect, or coefficient, of hashing on validation time, which is represented as a line in the plot. This coefficient can then be used in a consensus rule.
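The one-dimensional fit amounts to ordinary least squares (the data points below are fabricated for illustration, not from the measurements):

```python
# Toy one-dimensional version of the fit: estimate the per-kilobyte cost of
# hashing from (kilobytes_hashed, validation_ms) measurements. The data is
# fabricated so that validation_ms = 0.005 * kilobytes_hashed + 0.5 exactly.
blocks = [(100, 1.0), (400, 2.5), (800, 4.5), (1600, 8.5), (3200, 16.5)]

# ordinary least squares for validation_ms ~ coef * kilobytes_hashed + intercept
n = len(blocks)
sx = sum(x for x, _ in blocks)
sy = sum(y for _, y in blocks)
sxx = sum(x * x for x, _ in blocks)
sxy = sum(x * y for x, y in blocks)
coef = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - coef * sx) / n

def predicted_validation_ms(kilobytes_hashed):
    # the fitted line; the coefficient is what a consensus rule would use
    return coef * kilobytes_hashed + intercept
```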
MAYBE: If we assume that the resources involved grow at the same speed, this kind of metric can be naturally scaled by multiplying the whole equation with the inverse of the growth factor.
Validation cost is affected first and foremost by OP_CHECKSIG, that is, signature verification and hashing the transaction. Bitcoin Core already limits the number of OP_CHECKSIGs, but this is insufficient for our case because what counts is the number of OP_CHECKSIGs that are actually executed. We built on Gavin Andresen’s code to count those factors while validating transactions. We also record hashing via the OP_HASH opcodes, how many bytes are written to and removed from the stack, and the number of inputs, which loosely corresponds to the number of lookups in the utxo set. And of course we also measured our dependent variable, the ConnectBlock duration on the reference machine.
As a reference machine we used my laptop, which has two 3GHz i7 cores. To collect block feature data and the corresponding ConnectBlock duration, we reindexed the main chain, the test chain and custom regtest chains, which for example consisted of hard-to-validate blocks. I found out that I could comfortably use the computer while using only 5GB of 8GB RAM, so I set the dbcache option, which determines how much data is cached in memory, to 3GB. We ran Bitcoin Core version 0.11.2 with libsecp validation and disabled checkpoints.
After estimating the coefficients using linear regression, we get useful information: for each kilobyte of hashing, validation takes 0.005 milliseconds longer; for each signature verification, it takes 0.1 milliseconds longer. Other features do not play a comparably significant role at the moment, even though it is possible to create a block that takes around 5 seconds to validate and only consists of hash opcodes.
The validation cost function fit is very accurate: for a random selection of testnet and mainnet blocks we get an average absolute error of less than 4ms. Most importantly, the estimated function is able to predict hard-to-validate blocks very accurately: one tested example was a block that took 130.4ms to validate, and 131.7ms was predicted.
So now we have derived a validation cost metric that corresponds to validation time on a reference machine, and we can define a new consensus rule requiring a block to have a smaller validation cost than some threshold. After picking a threshold, there would be a situation like in this plot, where the x-axis is the block size, the y-axis validation time, and the green area represents the space of valid blocks.
However, picking a threshold is difficult because there is no one-size-fits-all solution: (1) you don’t want to constrain potential use cases, but (2) you also don’t want to sum the validation time and bandwidth worst cases.
On the other hand, we can try to relate bandwidth requirements and validation cost, using a simple weighted sum for example, and then pick a single threshold.
And this is exactly the idea behind the cost metric: find all factors affecting node cost and how exactly they influence it, and then pick a reasonable cost threshold. What this idea really entails is moving away from blocksize proposals towards arguing about total node costs.
Now the question is: how exactly do you convert bandwidth requirements and validation time to cost? Does it make sense to trade off one second of network latency against one second of validation duration? How do we bring additional cost factors in, like utxo set size? How future-proof is that solution?
There is certainly no single correct answer to these questions. We can, however, show the advantages of a cost function while building on existing block size proposals. Most block size proposals consider average use at a specific maximum block size. So in terms of cost threshold it would make a lot of sense to allow maximum sized blocks only in combination with average validation time. In this way we can prevent blocks that have both a worst-case size and worst-case validation time. We get the average validation duration for a specific block size using the data we collected earlier with the reference machine.
Also, we set a hard validation cost limit of 10 seconds, which seems reasonable because the maximum validation time we observed on the reference machine was 6 seconds. Then we linearly interpolate from the maximum validation cost at half of the maximum blocksize down to the average validation time at the maximum blocksize.
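One possible reading of this rule as code (apart from the 10-second limit, the concrete numbers are made up):

```python
MAX_BLOCK_SIZE = 1_000_000        # bytes (assumed 1MB maximum)
MAX_VALIDATION_COST = 10_000      # ms, the hard 10 second limit
AVG_VALIDATION_AT_MAX = 300       # ms, made-up average cost at maximum size

def allowed_validation_cost(block_size):
    # blocks up to half the maximum size get the full validation budget;
    # beyond that, the budget falls linearly to the average cost at max size
    if block_size <= MAX_BLOCK_SIZE / 2:
        return MAX_VALIDATION_COST
    frac = (block_size - MAX_BLOCK_SIZE / 2) / (MAX_BLOCK_SIZE / 2)
    return MAX_VALIDATION_COST - frac * (MAX_VALIDATION_COST - AVG_VALIDATION_AT_MAX)
```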
This shows an advantage of a cost metric: we constrain the worst case by bringing it closer to the average case, while still allowing possible future use cases which require a lot of validation resources.
So far, the cost of maintaining the utxo set has not played a role in Bitcoin. In fact, with a 1MB block the worst case utxo set size increase is almost 1MB, whereas the average over the past year is an increase of around 11 kilobytes. Finding a reasonable place in the cost function is even more complicated than for validation and bandwidth resources, in part because utxo costs are long-term costs. The current situation in Bitcoin is that there is no incentive to avoid increasing the utxo set size where possible, which can be as simple as moving bytes from the scriptSig to the scriptPubKey. What we can do with the cost function is place a slight incentive to include transactions that reduce the utxo set size, and thereby make them cheaper. The proposed way to do this is allowing larger validation costs when the block reduces the utxo set size. This aligns well with the fact that blocks that sweep a lot of utxos have rather extreme validation costs due to the high ratio of inputs to outputs, and we want these blocks to be valid because they are extremely beneficial.
In order to determine a specific function one can compute the maximum possible decrease of utxo set size for a block of maximum size. Then linearly interpolate such that for each byte the utxo set is reduced the maximum allowed validation costs are increased until we reach let’s say half of the remaining validation cost. This rule does not give the utxo set size the prominent place in the cost function it would deserve but at least moves incentives in the right direction.
This cost function can trivially grow with the blocksize by multiplying the validation cost limit and the average validation cost with the same scaling factor. So if the blocksize is doubled, the maximum validation cost and the average validation cost at the maximum blocksize are doubled as well.
This situation is shown in the plot for 1MB, 2MB and 4MB maximum block sizes.
It ensures that the worst case validation time scales as fast as the block size, which is an implicit assumption underlying many blocksize proposals. Also it guarantees that average blocks are always allowed to have the maximum block size.
In conclusion, Bitcoin places various resource requirements on full nodes, and it is essential that blocksize proposals account for at least the most important ones, or extreme worst cases become possible. A cost metric helps with that because it sets the requirements in relation to each other.
We’ve seen that estimating a function for validation cost only is straightforward when assuming a reference machine, collecting data and fitting a linear function.
A more complete cost function that includes bandwidth, validation and utxo requirements is difficult to derive from the bottom up. But as we showed, we can build on existing blocksize proposals to get some of the advantages of a cost metric, such as confining the worst case, allowing trade-offs between various block aspects, and setting the right incentives.
CSGOJackpot is a gambling website where players bet and win Counter-Strike: Global Offensive “skins” (weapon textures). Because these items can only be found by playing a lot of CS:GO, they are quite rare and valuable, and can be exchanged, for example, on Steam’s own Marketplace.
What is fascinating about CSGOJackpot and initially captured my attention is the sheer amount
of value that is gambled away. On average, more than $20,000 is thrown into the pots per hour.
TL;DR: CSGOJackpot is a node.js app that uses Math.random() to determine the winning ticket. Of course, this is not cryptographically secure, and it is trivial to predict the next number given two outputs of the random number generator. I did not try to profit from this vulnerability, but for the lulz I set up a twitch stream and revealed the next winning percentage in exchange for a drawing of Gabe Newell. See the submission gallery and a recording of the stream.
EDIT: There was quite some discussion about this issue on /r/GlobalOffensive.
EDIT 2: This vulnerability does not exist anymore in CSGOJackpot and I don’t know a similar site which is vulnerable.
CSGOJackpot works like this:
In addition to guessing the winning percentage, an attacker has to know the total number of tickets to be sure to win the pot. So he has to try to place the last bet, which can be tricky and is very difficult during times of high traffic because of huge lags.
The HTML showed some signs of node.js, so my hypothesis was that the site simply uses JavaScript’s Math.random() to determine the winning percentage. Fortunately, the full winning percentage with up to 16 digits is published after the end of a round, which is exactly the number of digits I got when I executed Math.random() on my machine. Node.js uses the V8 JavaScript engine, and its implementation of Math.random() (node.js 0.12.X) is as follows:
function MathRandom() {
  var r0 = (MathImul(18030, rngstate[0] & 0xFFFF) + (rngstate[0] >>> 16)) | 0;
  rngstate[0] = r0;
  var r1 = (MathImul(36969, rngstate[1] & 0xFFFF) + (rngstate[1] >>> 16)) | 0;
  rngstate[1] = r1;
  var x = ((r0 << 16) + (r1 & 0xFFFF)) | 0;
  // Division by 0x100000000 through multiplication by reciprocal.
  return (x < 0 ? (x + 0x100000000) : x) * 2.3283064365386962890625e-10;
}
This is known as Marsaglia’s Multiply-with-Carry. Note that the implementation used in node.js 0.10.X uses a very similar algorithm, but it’s implemented in C and the conversion to floating point is done differently.
So the RNG state has 64 bits, and 32 bits immediately leak from a single output. Given two subsequent outputs one can brute-force the remaining 32 bits of the state, which takes about 30 seconds on a 3.0 GHz i7 core (implemented in C). However, this failed to produce the correct state, so my guess was that there are some calls to Math.random() in between two winning percentages. It turned out that the number of calls in between varies between 8 and 35, and brute forcing this required a third winning percentage and around 5 hours in expectation. So now I had the correct state, which I verified by creating the next 50 numbers and checking if they contained the next winning percentages. But I didn’t find any pattern by which I could determine which of the next random numbers is going to be the winning percentage. Fortunately, there is another feature of CSGOJackpot which made this trivial.
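The basic recovery step can be sketched in Python (not the original C implementation; the sketch also ignores the intermediate-call complication described above). Each output reveals the low 16 bits of both state halves, and the hidden high bits are brute forced against the next output:

```python
def mwc_step(s0, s1):
    # One step of V8's MWC1616 generator (multipliers 18030 and 36969),
    # with uint32 semantics emulated via masking.
    s0 = (18030 * (s0 & 0xFFFF) + (s0 >> 16)) & 0xFFFFFFFF
    s1 = (36969 * (s1 & 0xFFFF) + (s1 >> 16)) & 0xFFFFFFFF
    return s0, s1

def mwc_output(s0, s1):
    # The raw 32 output bits behind Math.random(): low half of s0 in the
    # high 16 bits, low half of s1 in the low 16 bits. (V8 then divides
    # the signed interpretation by 2^32 to get a float.)
    return ((s0 << 16) + (s1 & 0xFFFF)) & 0xFFFFFFFF

def recover_state(out1, out2):
    # out1 leaks the low 16 bits of both state halves; brute-force the
    # two hidden high halves and check the candidate against out2.
    lo0, lo1 = out1 >> 16, out1 & 0xFFFF
    for hi0 in range(1 << 16):
        for hi1 in range(1 << 16):
            s0, s1 = (hi0 << 16) | lo0, (hi1 << 16) | lo1
            n0, n1 = mwc_step(s0, s1)
            if mwc_output(n0, n1) == out2:
                return n0, n1  # state right after producing out2
    return None
```

In Python the full 2^32 search is far too slow; the roughly 30 seconds quoted above refers to the C implementation.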
The site claims to be provably fair, but this is not really the case. What they are doing is a simple commitment to the winning percentage: they publish a hash md5(blinding + winning percentage) before the round (where the blinding is a uniformly random hex string) and reveal the blinding and winning percentage at the end of the round. Thus, they cannot adjust the winning percentage to their liking during or after the round. But, naturally, provable fairness would imply that even the server does not know the winning percentage ahead of time.
However, this feature made it possible to reliably predict the next winning percentage.
I observed that the blinding just consists of two calls to Math.random() which were converted to hex with toString(16).substr(2,4) and then concatenated. So now I just had to step through the next winning percentage candidates and the next blinding candidates and check if their hash matched the commitment.
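The final check — stepping through candidates until the md5 commitment matches — can be sketched like this (the helper names and the exact string formatting of the percentage are my assumptions):

```python
import hashlib

def commitment(blinding: str, percentage: str) -> str:
    # The published commitment: md5 over blinding || winning percentage
    # (assumed concatenation of the two decimal/hex strings).
    return hashlib.md5((blinding + percentage).encode()).hexdigest()

def find_winning_percentage(commit_hash, percentage_candidates,
                            blinding_candidates):
    # Step through the candidates predicted from the recovered RNG state
    # and return the percentage whose hash matches the commitment.
    for p in percentage_candidates:
        for b in blinding_candidates:
            if commitment(b, p) == commit_hash:
                return p
    return None
```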
One more word on provable fairness. It’s quite annoying to see CSGOJackpot and the many other sites that work similarly make exactly the same false claim. I’m not a cryptographer, so take the following with a grain of salt, and I’d be happy to learn if I’m missing something important. A truly fair scheme seems to be possible, although much more complex to implement. The underlying problem is known as coin flipping. In a two-player setting you can have each player commit to a value and then XOR the values in the reveal phase to get a statistically independent result. This is how, for example, Satoshi Dice achieves some level of fairness.
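The two-player commit-and-XOR protocol can be sketched like this (using SHA-256 commitments for illustration, not any particular site’s scheme):

```python
import hashlib
import secrets

def commit(value: bytes) -> tuple[bytes, bytes]:
    # Commitment: publish the hash, keep (nonce, value) secret until
    # the reveal phase.
    nonce = secrets.token_bytes(16)
    return nonce, hashlib.sha256(nonce + value).digest()

def coin_flip():
    # Commit phase: each player picks a random value and commits to it.
    v1, v2 = secrets.token_bytes(4), secrets.token_bytes(4)
    n1, c1 = commit(v1)
    n2, c2 = commit(v2)
    # Reveal phase: each player checks the other's opening against the
    # commitment before accepting it.
    assert hashlib.sha256(n1 + v1).digest() == c1
    assert hashlib.sha256(n2 + v2).digest() == c2
    # The XOR is uniformly random as long as at least one player is
    # honest, since neither could choose their value depending on the
    # other's.
    return bytes(a ^ b for a, b in zip(v1, v2))
```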
However, in a multiparty setting (assuming the existence of a broadcast channel), this can be trivially Sybil attacked: an attacker could create multiple identities and refuse to reveal one of his commitments if another one of his identities wins the pot. A trivial Sybil-resistant construction would have each player lose more by not revealing than what is in the pot, but this does not seem really practical. Another approach is to use timelock encryption instead of commitments, which means that after some time everybody can decrypt the value without having access to the key.
I didn’t play this game at all (it would have been unfair :) ), but for the lulz I had to at least troll them a bit. So I set up a twitch stream where I revealed the next winning percentages in exchange for a drawing of Gabe Newell. I privately disclosed the bug to the administrator the moment I started the stream.
Sorry for the bad quality in the beginning of the recording, it gets better at the 5:04 minute mark.
See also the submission gallery.
After 2 hours of fun they fixed the issue.
Interestingly, googling “node.js cryptographically secure random number generator” did not really turn up plug-and-play solutions for me.
Without knowing about the pitfalls of javascript, I suggested to use crypto.randomBytes(4).readUIntLE(0, 4) / 0xFFFFFFFF (if this is somehow wrong, please write me a message).
Unfortunately, so far they didn’t remove the “provable fairness” claim.
One of the things that must not happen during regular Bitcoin operation is a fork. A fork occurs when there is a new block $B_{i+1}$ which is a valid successor to block $B_i$ for some set of Bitcoin nodes $N_v$ and invalid for the remaining nodes $N_{\neg v}$. Therefore, miners in $N_v$ will mine new blocks on top of $B_{i+1}$ and miners in $N_{\neg v}$ will still mine on $B_i$. As long as the majority of hashpower is in $N_{\neg v}$, the chain divergence will be resolved after some time, because $N_{\neg v}$’s chain will eventually get longer than $N_v$’s chain and then the nodes in $N_v$ will switch to $N_{\neg v}$’s chain. This is due to the nature of the blockchain: nodes always trust the longest valid chain (more precisely: the chain with the most proof of work).
Consider for example the case of an update to the Bitcoin reference implementation that restricts valid signature encodings. $N_v$ are the nodes running the old Bitcoin version and $N_{\neg v}$ run the new version. As soon as the hash power of $N_{\neg v}$ exceeds some threshold, the new consensus rule can be safely activated. In the context of Bitcoin updates this is called a soft fork: a previously valid block becomes invalid in the new version. On the other hand, a hard fork occurs when a previously invalid block becomes valid in a new version, for example by raising the maximum block size limit. In that case, nodes running the old version are represented by $N_{\neg v}$. Even if the majority of hashpower is in $N_v$, the nodes in $N_{\neg v}$ can never switch to $N_v$’s chain because some blocks are invalid for them. Therefore, in the case of a hard fork all nodes are required to update.
Forks in practice do not only happen deliberately because of updating mechanisms but can also be triggered by bugs. Bitcoin reimplementations such as libbitcoin, btcd, bitcore and toshi are particularly vulnerable to these bugs because they have to match exactly the behavior of the Bitcoin reference implementation. In order to abstract part of the consensus-critical code and allow other projects to use it, Bitcoin Core developers created the libbitcoinconsensus library. I am not aware of any reimplementation that has already adopted libbitcoinconsensus. Right now, it only has a single function, bitcoinconsensus_script_verify, which takes an output script and a transaction and returns whether the transaction is allowed to spend the output.
Among other conditions, a transaction is valid if the top stack item is different from 0 after script execution. Bitcoin script is much more powerful than just verifying signatures, and therefore I was curious to find interesting scripts, i.e. scripts that trigger unusual edge cases. I had recently heard about successes with afl-fuzz, whose code-coverage heuristic seemed to be particularly well suited for the task. Also, it has the capability to minimize a set of inputs such that the code coverage stays the same. After fuzzing libbitcoinconsensus for two weeks, I supplied the inputs to btcd’s txscript, a reimplementation in golang, and checked if the outputs differed.
The first bug I found was in btcd’s implementation of the OP_IFDUP opcode. This opcode pushes the top stack element on the stack if it differs from 0. Because of a type conversion in btcd, a stack element that exceeds 4 bytes would never have been copied, which differs from bitcoinconsensus’ implementation of the opcode. The second bug concerned the representation of the result of OP_EQUAL. This opcode compares the two top stack elements and pushes the result on the stack. In Bitcoin Core, if the comparison fails, an empty byte array is pushed on the stack. Btcd however pushed a byte array containing 0. This means that a script such as OP_1 OP_2 OP_EQUAL OP_0 OP_EQUAL would be valid in bitcoinconsensus and invalid in btcd (note that OP_0 pushes an empty byte array to the stack, which compares equal to Core’s failed-comparison result but not to btcd’s).
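The divergence can be illustrated with a toy interpreter (a sketch, not Bitcoin’s actual script engine; only the opcodes involved are modeled, and the example script is hypothetical):

```python
def run_script(script, equal_false):
    # Toy evaluation of a tiny subset of Bitcoin script. equal_false is
    # the byte array pushed when OP_EQUAL's comparison fails: an empty
    # array models bitcoinconsensus, a single zero byte models the
    # btcd bug.
    stack = []
    for op in script:
        if op == "OP_0":
            stack.append(b"")  # empty byte array
        elif op.startswith("OP_PUSH:"):
            stack.append(bytes([int(op.split(":")[1])]))
        elif op == "OP_EQUAL":
            a, b = stack.pop(), stack.pop()
            stack.append(b"\x01" if a == b else equal_false)
    # the script succeeds if the top stack item is numerically non-zero
    return any(stack[-1])

# A script exhibiting the divergence: the failed OP_EQUAL result is
# compared against OP_0's empty byte array.
script = ["OP_PUSH:1", "OP_PUSH:2", "OP_EQUAL", "OP_0", "OP_EQUAL"]
core_valid = run_script(script, equal_false=b"")      # valid in Core
btcd_valid = run_script(script, equal_false=b"\x00")  # invalid in btcd
```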
Both bugs would have triggered hard forks. An attacker could simply broadcast a transaction with the affected scripts, and it would be mined subsequently. Btcd would not have been able to include the block into its chain and would become stuck on the last block. Therefore, an attacker could create a block on top of btcd’s chain paying a merchant running btcd, without affecting his ‘real’ coins on the main chain. Note that the attacker would not race against the hashpower of the Bitcoin miners.
Dave Collins from the btcd team fixed these issues very quickly and additionally improved the test coverage in Bitcoin Core for the affected opcodes and others. He was also kind enough to award me 0.5 bitcoin for the find.
You can find the results of the fuzzing, the code to produce them, and the reimplementation tests in the bitcoinconsensus_testcases repository. If you are interested, you can start fuzzing yourself and submit a pull request with new scripts you found. Also, I’ve executed the testcases only with btcd and bitcore so far.
As part of the bug bounty program I was awarded 20 Bitcoin.
In this post I show that the first stage of the attack, namely learning the nodes a victim is directly connected to, can be done with a single connection to the victim. In addition to BKP’s attack, knowing all outbound peers of a client could significantly increase the success probability of a double spend. Note that all experiments are based on Bitcoin Core 0.9.4, but 0.10.0 shows the same behavior.
TL;DR: The attacker can reliably guess all of the outbound connections of a victim by making a selection from the victim’s known addresses based on the addresses’ timestamps.
Update: A fix has been merged into bitcoind. The timestamp is no longer updated when receiving a message from a connected peer. Instead, it is only updated when the peer disconnects. The fix is released in Bitcoin Core 0.10.1.
When a node $n$ connects to another peer $p$ in the network, it advertises its address using the “addr” message. The peer will select a number of its own peers at random which are “responsible” for $n$’s address. Then the address is forwarded to the responsible peers to spread the knowledge about $n$ in the network. The number of responsible peers is either $1$ or $2$, depending on whether the address is reachable by $p$.
BKP’s attack works by recording the set of peers that first propagated a victim’s address. In order to have a good chance of being in the set of responsible peers for the address, the attacker has to hold a significant number of connections to each full node in the network. Note that it is possible to have multiple connections from a single public address to a peer.
It turns out that an attacker can simply infer the peers of a victim by sending getaddr messages to him.
In bitcoin, the address structures that are sent via the addr message do not only contain the IP address and port but also a timestamp. The timestamp’s role is to ensure that terminated nodes vanish from the network’s knowledge, and it is regularly refreshed by the nodes which have an interaction (more about that later) with the peer at that address. Bitcoin nodes usually record the addresses they hear about and send them in reply to a getaddr using the addr message.
The following experiments show that an attacker can guess some or all of the direct peers of a victim by sorting the known addresses of the victim based on the timestamp.
A minor obstacle is that a node replies to a single getaddr message with at most 2500 addresses, selected uniformly at random. In order to get a certain percentage $\tau$ of the known addresses of a node, the attacker has to send multiple getaddr messages and record the percentage that is new to her.
Experiments show that if we wait 10 seconds after each getaddr request, it takes around $3.5$ minutes to collect $\tau$ percent of the addresses ($13,500$ in this case).
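For illustration, the collection loop can be sketched in Python (the actual attacker code is in Go; `send_getaddr` is a stand-in for one getaddr round trip returning up to 2500 uniformly sampled addresses):

```python
import time

def collect_addresses(send_getaddr, tau=0.95, wait=10):
    # Keep querying until the fraction of previously unseen addresses
    # in a reply drops below 1 - tau; since replies are uniform samples,
    # by then roughly a tau-fraction of the victim's known addresses
    # has been collected.
    known = set()
    while True:
        reply = send_getaddr()
        new_fraction = sum(a not in known for a in reply) / len(reply)
        known.update(reply)
        if new_fraction < 1 - tau:
            return known
        time.sleep(wait)
```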
I set up a victim node $v$, which is just a regular bitcoin node. The attacker $a$ is a node that first connects to $v$ via the P2P network and queries the known nodes of $v$. Second, $a$ connects to $v$ via the RPC interface and gets the true peers.
The attacker code (btcP2PStruct) is available on github. Thanks to the btcwire package it is very simple to write this kind of code.
You can find all the data to produce the graphs in the project repository.
First we consider the case where $v$ does not accept incoming connections (“client” in BKP’s terms). $v$ was running for 2 days and I recorded data every hour, but I will only discuss the last measurement because the data is very similar.
Note that $v$ returned $12,868$ known addresses. Also, a client usually has at most 8 peers due to the default maximum number of outbound connections. This implies that an attacker cannot start this attack on a client that is not connected to her. Here we see that the attacker obtains all peers of $v$ (without any false positives in this case).
Next, the case for the full node, which I left running for 8 days.
Again it is evident that an attacker can reliably determine all outbound connections of the victim using a threshold of 20 minutes. However, inbound peers can only be detected very poorly.
The reason for finding all outbound peers is this logic in Bitcoin Core, which refreshes the timestamp on every message from outbound nodes.
BKP mention a neat trick to determine if two nodes $v_1$ and $v_2$ are connected. First, the attacker connects to $v_1$ and $v_2$ and sends addr messages containing bogus addresses to $v_1$. Then, she counts the number of times one of these addresses is received from $v_2$. However, the authors leave open how many messages you need to send to be certain about the hypothesis.
As we already know, an address is forwarded to only two responsible nodes, so we have to compute the probability of our node being responsible. Using the binomial distribution, we can compute the likelihood of receiving a certain number of addresses back given that we sent a certain number of addresses.
I’ve done the math using this code and some assumptions regarding the network structure (edges are uniformly iid). Also, the attacker has to know or approximate the number of peers of a node, which can be done with a method similar to the one described above: connect two times to the victim, send bogus addresses and note the ratio of addr messages returned. If you cannot connect to the node, it will most likely have 8 peers.
This theoretical model shows that if $v_1$ is a full node and $v_2$ is a client, then we need about 2000 messages to determine with 95% probability whether they are connected. Similarly, if $v_1$ and $v_2$ are full nodes, the attacker needs to send 20000 messages.
However, in order to remain polite in the network, this attack needs to start from a candidate set of nodes. Therefore, it could be a useful method to remove the false positives which were obtained with the “getaddr” fingerprint.
It should be pointed out that even if you know a victim’s entry nodes you can not simply connect to those few and listen for transactions. This is because “trickling” prevents estimating the origin of a transaction without further assumptions or doing BKP’s Sybil attack. However, knowing all outbound peers of a client could significantly increase the success probability of a double spend.
Update: The fix removes the timestamp update every 20 minutes and instead updates the timestamp on disconnect.