Security Risk
benyanke opened this issue · 2 comments
This is a big security risk: it reduces the strength of the user's keys.
I'd really encourage you to, at minimum, consider adding a BIG disclaimer to the README, and perhaps even to the software itself, that this is for experimental purposes only, and that use in production will reduce the operational security of your WireGuard server.
Reducing key entropy by limiting your key space is, by definition, reducing the security of a WireGuard install.
It's a neat idea, but it's just not secure.
Thanks for the attention! I don't think we're reducing the keyspace in a way that affects security, but let me try a couple of different arguments for you and see if you agree.
WireGuard basically reveals the public key of a VPN server to all clients of that VPN. An attacker might be a legitimate client of that VPN, but because they're evil, they want to learn the server's private key so they can spoof the server to the other clients, and learn their secrets. So given a public key, they want to find a matching private key. This is hard, because the private key -> public key function is a "one-way function": it's easy to convert private to public, but computationally infeasible to convert public to private any faster than a brute-force search of the entire keyspace.
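For concreteness, here's a minimal sketch of the easy direction in Python (using the third-party cryptography package rather than this project's Rust code; the variable names are just illustrative). Private-to-public is one cheap scalar multiplication, and the public key is just the raw 32 bytes Base64-encoded; there is no function going the other way.

```python
# Sketch only: derive a WireGuard-style public key from a random private key.
import base64
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

private_key = X25519PrivateKey.generate()      # 32 random secret bytes
public_key = private_key.public_key()          # one Curve25519 scalar multiplication

pub_b64 = base64.b64encode(
    public_key.public_bytes(Encoding.Raw, PublicFormat.Raw)
).decode()
print(pub_b64)                                 # a 44-character Base64 string
```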
My primary safety argument is that this vanity-public-key scheme is filtering the public key, not the private key. We generate the keypairs by choosing private keys at random, finding the corresponding public key (via scalar multiplication), then examining the public key to see whether we like it or not (the predicate checks that a given target string is present somewhere in a short prefix of the lowercased Base64-encoded public key).
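As a hedged sketch of that generate-and-filter loop (again in Python rather than the tool's actual Rust; the target string and prefix length are illustrative, not the tool's exact parameters):

```python
# Generate-and-filter: draw random private keys, keep one whose *public* key
# happens to contain the target string near the front.
import base64
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.serialization import (
    Encoding, NoEncryption, PrivateFormat, PublicFormat,
)

def find_vanity_keypair(target: str, prefix_len: int = 10):
    while True:
        priv = X25519PrivateKey.generate()
        pub_raw = priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
        pub_b64 = base64.b64encode(pub_raw).decode()
        # The predicate only ever inspects the public key.
        if target.lower() in pub_b64[:prefix_len].lower():
            priv_raw = priv.private_bytes(Encoding.Raw, PrivateFormat.Raw, NoEncryption())
            return base64.b64encode(priv_raw).decode(), pub_b64

# A short target keeps this toy loop fast; a five-character target would need
# millions of tries on average.
private_b64, public_b64 = find_vanity_keypair("ab")
print(public_b64)
```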
This reduces the number of possible keypairs, but not in a way that is useful to the attacker. The attacker already knows the public key; the only thing they don't know is which private key goes along with it. Knowing that the public key also obeys some other constraints doesn't reduce their uncertainty: they have no uncertainty about the public key at all.
The Curve25519 system used by WireGuard has about 2**252 distinct public/private keypairs. Call this gigantic number S. By applying a filter to the Base64 representation of the public key, we're restricting our keyspace from S down to S / F, where F is a factor of about 32 for each letter of the desired target string, and a bit less when the target is shorter than the prefix. A typical use case for this tool would be to find a five-character target inside a ten-character leading prefix, which gives us an F of 5.6 million.
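A rough back-of-the-envelope version of that F (ignoring overlapping matches and the exact Base64 alphabet details, so the tool's real accounting may differ slightly):

```python
# Estimate F for a 5-character target allowed anywhere in a 10-character prefix.
target_len = 5
prefix_len = 10
positions = prefix_len - target_len + 1     # 6 possible starting positions
F = 32 ** target_len / positions            # ~32 per letter, divided by the slack
print(f"F ~= {F:,.0f}")                     # roughly 5.6 million
```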
It's true that there are only S / F private keys that could generate a suitable address, but the attacker doesn't know which ones they are. In fact, there is only one private key that could generate the exact public key being attacked. The scalarmult operation is basically a random mapping, so a constraint on the output doesn't lead to a constraint on the input that's usable with less effort than a brute-force search.

So applying a constraint to the public key doesn't make the attacker's job easier at all. The only way it could help them is if they already had a list of all the private keys whose public key matched our constraint (then they could search just the keys in that list, instead of all possible keys). For a five-character target, this list is (effectively) infinitely long (S / 5.6M ~= 2**230), and would take them (effectively) infinite time (2**252 scalar multiplications) to construct. Increasing the target size reduces the length of the list they have to store and search, but not the time it takes them to build it. As long as either value is effectively infinite, the vanity constraint doesn't reduce their search costs.
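Putting order-of-magnitude numbers on that hypothetical list:

```python
import math

S = 2 ** 252        # total Curve25519 keypairs, roughly
F = 5.6e6           # filtering factor for the five-character case above

list_entries = S / F
print(f"list length ~ 2**{math.log2(list_entries):.0f} entries")   # about 2**230
# Building the list means testing every private key: about 2**252 scalarmults.
```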
The size of their list remains effectively infinite until we apply a truly ridiculous constraint, like 40 matching characters (the total pubkey is only 43 characters long), which makes our F about 2**200, leaving about 2**52 possible keypairs in their list, which takes up maybe 128 petabytes (2**52 keys at 32 bytes each). This is just maybe possible to store, but 1: it still takes them 2**252 time to build it, and 2: now it takes us 2**200 time to find a matching key. So our CPU limits become a protective factor.
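Worked out, the 40-character case looks roughly like this (assuming each list entry stores a 32-byte private key):

```python
import math

S = 2 ** 252
F = 32 ** 40                          # 40 matching characters -> F ~ 2**200
remaining = S // F                    # keypairs that still satisfy the constraint
storage = remaining * 32              # bytes, at 32 bytes per stored private key

print(f"F       ~ 2**{math.log2(F):.0f}")                   # 2**200
print(f"list    ~ 2**{math.log2(remaining):.0f} entries")   # 2**52
print(f"storage ~ {storage / 2**50:.0f} PiB")               # about 128 pebibytes
```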
(There are clever tricks to reduce some of these values, but in general the best they do is cost 2**126 time and 2**126 space, both of which are still basically infinite. They might be able to amortize the list-construction time over multiple attacks, but we're still talking infeasibly large costs.)
Now, the situation would be different if we were filtering on the private key. Imagine a system in which you picked random 256-bit symmetric keys, threw out all of them that didn't match a pretty 35-bit prefix (a 7-letter vanity string), and then published the hash of the key as a public identifier. Now the attacker knows that the input space is smaller, so they only have to search through 2**(256-35) keys to find one that matches the public identifier. That reduces their work factor from 2**256 to 2**221 (still impossible, but a noticeable improvement). The other arguments still apply, but limiting the private key does have an impact on the time/cost of an attack, whereas I think limiting the public key does not.
(Incidentally, it would probably be a disaster to apply constraints to the private key in an elliptic curve system like Curve25519. I know that even a small bias in the nonce in ECDSA is enough to leak the private key, so I assume there are similar number-theoretic dangers to bias in the private key itself. Let's never build a wireguard-vanity-private-key tool.)
Does that feel compelling at all?
cheers,
-Brian
That makes a lot of sense actually. Many thanks for the extremely thorough explanation!