WebReflection/usignal

Update benchmark numbers for @preact/signals-core

Closed this issue · 11 comments

We've just released a new version of @preact/signals-core in which we reworked the internals. With the new version, the numbers in the readme no longer reflect the actual measurements. These are the results I get on my machine (M1 Pro) with a fresh clone of usignal.

Screenshot 2022-09-21 at 18 15 04

gosh ... I put this bench up like minutes ago ... and that's a plot twist!!! I'll run it again and update the image

uhm ... that doesn't sound right folks ... are you ditching subscriptions here? 'cause if I do, this is the result

Screenshot from 2022-09-21 18-24-20

P.S. I'm not blaming your release, just trying to understand if we're comparing apples to oranges here ... if we're not, I might drop preact from the equation until I am sure I can compete with the subscription-ditching logic (but congrats on the release, if it doesn't break anything else and it's ready to fully support nested effects!)

edit: just tested, it doesn't support nested effect removal 😢

P.S. I've created a ditching-subscriptions branch which passes the test and smokes 'em all.

Screenshot from 2022-09-21 19-57-00

No, we're not ditching subscriptions. We're only setting them up when needed.
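In terms of the public API, that roughly means subscriptions only exist while something is actively observing. Here's a minimal sketch illustrating the claim (an illustration of the observable behavior, not the library internals):

```js
import { signal, computed, effect } from '@preact/signals-core';

const count = signal(0);

// Creating the computed does not wire it up to `count` yet.
const double = computed(() => count.value * 2);

// Reading it outside of an effect evaluates it on demand,
// without keeping a persistent subscription around.
console.log(double.value); // 0

// Only while something observes `double` (here, an effect) are the
// subscriptions actually set up, so changes to `count` propagate.
// The effect runs once immediately, logging "double is 0".
const dispose = effect(() => console.log('double is', double.value));
count.value = 21; // logs "double is 42"

// Disposing the effect tears the subscriptions back down.
dispose();
```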

I'm unable to reproduce the results you have on my end:

Screenshot 2022-09-21 at 20 07 25

Let me know if there are more steps needed to run the benchmarks for the ditching-subscriptions branch.

> We're only setting them up when needed.

what does that mean? how can you know whether a computed has conditional logic in it, and therefore a different set of dependencies, on a second run?
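To make the concern concrete, here's a minimal case (written with usignal's API, but the same shape applies to any implementation): the dependency set of a computed can change between runs, so it can't be decided once up front.

```js
import { signal, computed } from 'usignal';

const useA = signal(true);
const a = signal(1);
const b = signal(2);

// The first run reads `useA` and `a`; once `useA` flips to false,
// the next run reads `useA` and `b` instead, so dependencies have
// to be re-tracked on every evaluation.
const picked = computed(() => (useA.value ? a.value : b.value));

console.log(picked.value); // 1
useA.value = false;
console.log(picked.value); // 2
```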

> I'm unable to reproduce the results you have on my end:

yeah, that was my bad: I'd pushed the branch without the changes in it ... please try again with the same branch and run `npm run benchmark`

P.S. I am planning to change the benchmark to use conditional logic inside, with predictable results so correctness can be verified on a second run too (currently every computed just reads the same signals repeated a number of times).

maybe that can surface some extra timing, or correctness issues, to consider ...

meanwhile ... the benchmark doesn't check for correctness, it just compares against known results ... we should review benchmarks before relying on these ...
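Something along these lines is what I have in mind (a hypothetical sketch, not the actual benchmark code): the expected total is known up front, so correctness can be asserted alongside timing.

```js
import { signal, computed } from 'usignal';

const toggle = signal(true);
const odd = signal(1);
const even = signal(2);
const picked = computed(() => (toggle.value ? odd.value : even.value));

console.time('conditional');
let sum = 0;
for (let i = 0; i < 1e5; i++) {
  toggle.value = i % 2 === 0;
  sum += picked.value;
}
console.timeEnd('conditional');

// The expected total is known up front, so a stale subscription or a wrong
// dependency graph shows up as a failed assertion, not just as odd timing.
const expected = (1e5 / 2) * (1 + 2); // 150000
console.assert(sum === expected, `expected ${expected}, got ${sum}`);
```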

edit: bug filed at maverick-js/signals#9

OK, I've added the new screenshot after changing the demo to consider conditional subscriptions, and indeed preact is correct.

I've updated the image in the README; I hope this is OK, and fair ... but I am also extremely curious to know how you managed to achieve this performance while skipping subscriptions with such fine-grained control.

If you could share a bit about what happens there, that'd be awesome; otherwise I'll need to dig into your code.

Closing, unless you think the benchmark is still not correct.

benchmark screenshot

Thanks for updating the benchmark results. The main PR that made those numbers achievable was this one: https://github.com/preactjs/signals/pull/161/files