Low Keccak Hash Performance
holgerd77 opened this issue · 3 comments
Hi Paul,
I am just testing out the Noble libraries to analyze for an eventual breaking release integration (through ethereum-cryptography).
On the hash function (keccak) I am seeing some serious performance degradation, which would be significant for us, e.g. for the Trie + VM use case.
So here are some numbers I get:
The old keccak package using the Node bindings:
import { keccak224, keccak384, keccak256 as k256, keccak512 } from 'ethereum-cryptography/keccak'
console.time('p'); for (let i=1; i <= 100000; i++) { k256(Buffer.from([1,2,3])) }; console.timeEnd('p');
// p: 274.337ms
The old keccak package using native JS (by manually switching to it in the index.js file):
// index.js of the keccak package, patched to always use the pure-JS implementation:
console.log('k2')
module.exports = require('./js')
import { keccak224, keccak384, keccak256 as k256, keccak512 } from 'ethereum-cryptography/keccak'
console.time('p'); for (let i=1; i <= 100000; i++) { k256(Buffer.from([1,2,3])) }; console.timeEnd('p');
// p: 199.906ms
(astonishingly even faster, Apple MacBook Air M1)
The noble/hashes library:
import * as sha3_ from "@noble/hashes/sha3"
console.time('p'); for (let i=1; i <= 100000; i++) { sha3_.keccak_256(Uint8Array.from([1, 2, 3])) }; console.timeEnd('p');
// p: 568.131ms
So this would be roughly a 2x slowdown, which would make it a hard decision for us to switch here, weighing the obvious benefits against it, since performance is an extremely important factor for us: we do a lot of hashing operations in the Trie library.
Am I doing everything correctly here on the measuring side? I've also seen this README entry on performance, but I can't easily put it into context and compare it with the existing library we are using right now.
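For reference, a variant of the benchmark that hoists the input allocation out of the loop (so only the hash call itself is timed, not Buffer creation) would look roughly like this, reusing the exact same bytes for both implementations:

```js
import { keccak256 } from 'ethereum-cryptography/keccak'
import { keccak_256 } from '@noble/hashes/sha3'

// Allocate the input once; Buffer is a Uint8Array subclass, so both APIs accept it.
const input = Buffer.from([1, 2, 3])

console.time('ethereum-cryptography keccak256')
for (let i = 0; i < 100000; i++) { keccak256(input) }
console.timeEnd('ethereum-cryptography keccak256')

console.time('@noble/hashes keccak_256')
for (let i = 0; i < 100000; i++) { keccak_256(input) }
console.timeEnd('@noble/hashes keccak_256')
```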
Thanks a lot for the great work done here! 🙂
As you can see, the README mentions the following numbers on M1:
SHA3-256, keccak256, shake256 32B x 184,026 ops/sec @ 5μs/op
That is about the same as in your test: your benchmark does ~176K ops/sec (100,000 hashes in 568 ms), which is pretty close.
Now, my question is: could you provide some actual real-world benchmarks for trie and VM? I assume you have some. For all in-browser cases, 60 FPS rendering requires one frame per 16.6 ms, which leaves us ~2800 invocations of keccak per frame before the frame gets skipped, e.g. 2800 addresses calculated per frame. That seems fast enough, doesn't it?
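The frame-budget arithmetic, spelled out as a rough sketch using the README number quoted above:

```js
// Rough frame-budget math: how many keccak256 calls fit in one 60 FPS frame.
const opsPerSec = 184026      // keccak256, 32-byte input, M1 (README benchmark above)
const frameMs = 1000 / 60     // ≈ 16.6 ms per frame
const hashesPerFrame = Math.floor(opsPerSec * frameMs / 1000)
console.log(hashesPerFrame)   // ≈ 3000; ~2800 leaves a little headroom for other work
```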
Clarification: I'm sure the VM uses keccak heavily.
However, you need to compare a bare BigInt rewrite of the VM to an old version with bn.js. I don't think the difference would be that big.
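To illustrate what such a comparison could look like, here is a minimal micro-benchmark sketch (assuming bn.js is installed; the operation and operand sizes are arbitrary picks for illustration, not taken from the VM):

```js
import BN from 'bn.js'

// 256-bit operands, roughly the size of EVM words.
const hex = 'ff'.repeat(32)

let r1 = 0n
const a = BigInt('0x' + hex), b = 12345n, m = a + 1n
console.time('native BigInt')
for (let i = 0; i < 100000; i++) { r1 ^= (a * b) % m }
console.timeEnd('native BigInt')

let r2 = new BN(0)
const an = new BN(hex, 16), bn = new BN(12345), mn = an.addn(1)
console.time('bn.js')
for (let i = 0; i < 100000; i++) { r2 = r2.xor(an.mul(bn).mod(mn)) }
console.timeEnd('bn.js')

console.log(r1 === 0n, r2.isZero()) // keep results alive so the loops aren't optimized away
```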
Just spoke with @alcuadrado.
@holgerd77 we can do 3.5x of the current performance easily; in fact, I just updated test/benchmarks/README. To do it, replace keccakP with the one from the gist.
However, I'd only do this in exceptional cases, since it breaks auditability/readability. So let's wait for those benchmarks and we'll see.
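For anyone who wants to measure the effect of swapping the permutation locally, a rough way to time keccakP in isolation (this assumes your version of @noble/hashes exports keccakP from the sha3 module):

```js
import { keccakP } from '@noble/hashes/sha3'

// keccak-f[1600] operates on a 200-byte state, stored as 50 x 32-bit words here.
const state = new Uint32Array(50)

console.time('keccakP x 100000')
for (let i = 0; i < 100000; i++) { keccakP(state, 24) }
console.timeEnd('keccakP x 100000')
```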
Seems like the ethereumjs VM is ~50% faster now than it was before, so closing.