Make time estimate and scoring thresholds configurable
camerondm9 opened this issue · 2 comments
Problem
Hardware continues to become more powerful for less money, and it has been 11 years since zxcvbn was originally released (2012). I think the thresholds used to convert a number of guesses into a time estimate and score should be increased to reflect how powerful consumer hardware has become.
Some examples using hashcat:
My laptop (GPU: GTX 1650 Ti) can test 4 billion salted SHA1 hashes/second, or 1.4 million PBKDF2(999 iterations) hashes/second.
A single RTX 4090 can test 50 billion salted SHA1 hashes/second, or 19 million PBKDF2(999 iterations) hashes/second.
These are both weak hashes, which means we should use the `offlineFastHashing` time estimate and require passwords to have a score of 4. However, `offlineFastHashing` assumes a maximum guess rate of 1e10 guesses/second, and a single RTX 4090 can exceed that! This threshold should probably be increased by a factor of ~100 to restore the properties it had when zxcvbn was originally released:
One of the newest GPUs in 2012 was the GTX 680, which could test 500 million salted SHA1 hashes/second.
The Radeon HD 7970 (also 2012) could do 2.8 million salted SHA1 hashes/second.
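To make the factor-of-~100 argument concrete, here is a minimal sketch of how an assumed guess rate turns a guess count into a crack-time estimate (the rates are the figures above; the variable names and the example guess count are illustrative, not zxcvbn's internals):

```typescript
// Sketch: an assumed attacker guess rate converts a guess count into a
// crack-time estimate. The example guess count is illustrative.
const exampleGuesses = 1e12;

const secondsToCrack = (guesses: number, guessesPerSecond: number): number =>
  guesses / guessesPerSecond;

// zxcvbn's current offlineFastHashing assumption: 1e10 guesses/second.
const atCurrentThreshold = secondsToCrack(exampleGuesses, 1e10); // 100 seconds
// A single RTX 4090 on salted SHA1: ~5e10 guesses/second.
const atRtx4090Rate = secondsToCrack(exampleGuesses, 5e10); // 20 seconds
// A ~100x higher threshold of 1e12 guesses/second:
const atProposedThreshold = secondsToCrack(exampleGuesses, 1e12); // 1 second
```

The same password that currently gets a "100 seconds" estimate would be rated as crackable in 1 second under the proposed assumption, which better matches real attacker hardware.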
Moving to stronger hashes:
My GTX 1650 Ti can test 500 scrypt hashes/second, but a single RTX 4090 can test 7000 scrypt hashes/second. This is close to the `offlineSlowHashing` threshold of 1e4 guesses/second, but a cluster of RTX 4090s (common, if cracking passwords) would be proportionally faster. Therefore, the `offlineSlowHashing` threshold should be increased too, maybe by a factor of ~10.
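The cluster argument can be sketched in a couple of lines (per-GPU figure from the benchmarks above; the cluster size is an illustrative assumption):

```typescript
// Sketch: scrypt cracking throughput scales roughly linearly with GPU count.
const scryptHashesPerSecondPerRtx4090 = 7_000; // single-GPU figure from above

const clusterGuessRate = (gpuCount: number): number =>
  scryptHashesPerSecondPerRtx4090 * gpuCount;

const single = clusterGuessRate(1);        // 7,000/s: just under the 1e4 threshold
const smallCluster = clusterGuessRate(16); // 112,000/s: ~11x over the 1e4 threshold
```

Even a modest 16-GPU rig exceeds the current `offlineSlowHashing` assumption by an order of magnitude, which is the motivation for the ~10x increase.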
GTX 1650 Ti tested locally. RTX 4090 benchmark results here. GTX 680 benchmark results here. Radeon HD 7970 results here.
Conclusion
- I suggest the thresholds for determining the time estimates and scores be made configurable, so that they can be changed to keep up with increasing hardware performance. The offline-attack thresholds are most important to change, but making all thresholds configurable would be nice for consistency.
- Changing the defaults wouldn't be a bad idea either, but it would be a breaking change for anyone relying on the scores matching between client and server. This might be acceptable if the thresholds are configurable, since those users could simply override them to the old values (or to values that match the server, if different).
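A configurable API could look something like the following sketch. The interface and field names here are hypothetical, not the actual zxcvbn-ts API; the default rates are zxcvbn's current published assumptions:

```typescript
// Hypothetical shape for configurable attack-rate thresholds
// (illustrative only, not the real zxcvbn-ts options object).
interface AttackTimeThresholds {
  onlineThrottling: number;   // guesses/second
  onlineNoThrottling: number; // guesses/second
  offlineSlowHashing: number; // guesses/second
  offlineFastHashing: number; // guesses/second
}

// zxcvbn's current assumptions.
const defaults: AttackTimeThresholds = {
  onlineThrottling: 100 / 3600, // 100 guesses/hour
  onlineNoThrottling: 10,
  offlineSlowHashing: 1e4,
  offlineFastHashing: 1e10,
};

// A caller worried about modern GPUs could override just the offline rates:
const hardened: AttackTimeThresholds = {
  ...defaults,
  offlineSlowHashing: 1e5,  // ~10x increase
  offlineFastHashing: 1e12, // ~100x increase
};
```

Keeping the defaults untouched while allowing overrides would avoid the breaking change while still letting security-conscious applications tighten their estimates.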
Hey, thank you for your concern. I think I saw this kind of issue in another zxcvbn repository 🤔
Most of your assumptions are correct. Hardware is getting faster and faster, so SHA1 is just useless.
But as stated in the documentation for the return value https://zxcvbn-ts.github.io/zxcvbn/guide/getting-started/#output
3 # safely unguessable: moderate protection from offline slow-hash scenario. (guesses < 10^10)
4 # very unguessable: strong protection from offline slow-hash scenario. (guesses >= 10^10)
This scoring is used for slow hashes like bcrypt, and if I understand the benchmark results for the RTX 4090 correctly, the hashes per second are still lower for something like bcrypt.
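The documented guesses-to-score mapping can be sketched as follows (the score 3/4 cutoff of 1e10 is from the docs quoted above; the lower cutoffs match zxcvbn's published scoring and are restated here for illustration):

```typescript
// Sketch of zxcvbn's documented guess-count -> score mapping.
const scoreForGuesses = (guesses: number): number => {
  if (guesses < 1e3) return 0;  // too guessable: risky password
  if (guesses < 1e6) return 1;  // very guessable
  if (guesses < 1e8) return 2;  // somewhat guessable
  if (guesses < 1e10) return 3; // safely unguessable
  return 4;                     // very unguessable
};
```

Note that this mapping is about guess *counts*, while the thresholds discussed in this issue are guess *rates*; both would need to move together for the scores and time estimates to stay consistent.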
BUT I think I can make it customizable, as it shouldn't be that hard, and developers who really want STRONG passwords for their applications would be pleased to have this kind of customization.
So yes, I will add the customization in the next few days 👍
There is just one little concern. This is JavaScript, and the largest representable number is about 1.8e308 🤔 Above that, it becomes +Infinity.
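The concern can be demonstrated in two lines: JavaScript numbers are IEEE 754 doubles, so the largest finite value is `Number.MAX_VALUE`, and anything beyond it overflows to `Infinity`:

```typescript
// JavaScript numbers are IEEE 754 doubles: the largest finite value is
// Number.MAX_VALUE (~1.7976931348623157e308).
const maxFinite = Number.MAX_VALUE;
const overflowed = maxFinite * 10; // overflows to Infinity

const maxIsFinite = Number.isFinite(maxFinite);       // true
const overflowIsFinite = Number.isFinite(overflowed); // false
```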
Thanks for the quick response!
It looks to me like a single RTX 4090 can test 180,000 bcrypt hashes/second, which exceeds the `offlineSlowHashing` threshold (1e4) by a factor of ~18. This is much slower than SHA1, but still significantly faster than zxcvbn is expecting, which means the time estimates are too short.
Making the thresholds customizable would be ideal! 👍
I don't think the JavaScript `Number.MAX_VALUE` will be a problem. That number is still so much bigger than what we're working with here.