glimmerjs/glimmer-vm

Rendering performance regression between 0.84.3 and 0.85.1?

Opened this issue · 20 comments

In Discourse, and in Emberperf, we saw a fairly significant rendering-performance hit as part of the Ember 5.5 -> 5.6 bump:

Ember 5.6 included a bump of glimmer-vm from 0.84.3 to 0.85.1 (emberjs/ember.js#20561)

Unfortunately, the 0.84.3 -> 0.85.1 range includes a lot of structural changes in glimmer-vm, many of which were made without glimmer-vm's own performance testing in working order.

I was able to boot the glimmer-vm benchmark app on a handful of old commits, and run tachometer on them to compare the 'render' performance.measure metric.
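
For context on methodology: the benchmark app emits a 'render' entry via the User Timing API, which tachometer can read with its `performance` measurement mode. A minimal sketch of that pattern, with a hypothetical `runRenderBenchmark()` standing in for the actual glimmer-vm benchmark code:

```ts
// Sketch of the User Timing pattern; runRenderBenchmark() is a placeholder
// for the real render pass in the benchmark app.
function runRenderBenchmark(): void {
  // ...render the benchmark scene...
}

performance.mark('render-start');
runRenderBenchmark();
performance.mark('render-end');

// tachometer can pick this entry up via
//   measurement: { mode: 'performance', entryName: 'render' }
performance.measure('render', 'render-start', 'render-end');

const [entry] = performance.getEntriesByName('render', 'measure');
console.log(`render: ${entry.duration.toFixed(2)}ms`);
```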

| Commit | Avg time |
| --- | --- |
| 1441335 (v0.84.3) | 29.19ms - 29.31ms |
| 99fee7d (first merge of pnpm) | 29.08ms - 29.21ms (no change) |
| 664a746 ('restore performance tests', after a bunch of refactoring)¹ | 39.14ms - 39.29ms |
| 41f1528 (v0.86.0) | 51.85ms - 52.01ms |

These numbers are clearly going in the wrong direction. It is also worth mentioning that the benchmark app itself underwent a bunch of refactoring across these commits, so it might not be a perfect comparison.

I would love to be able to bisect into specific commits to identify what caused the regressions. Unfortunately, on all the intermediate commits I've tried, I've been unable to get the benchmark app to boot because of various import/dependency/package-json errors. It seems the 'perf.yml' GitHub CI job was disabled for much of this time, so I assume this was a known problem on these commits, and not a problem with my local setup.

So... I don't really know where that leaves us. Does anyone have any pointers for what else we can do to isolate the source of the regression(s)?

Footnotes

  1. with 56ddfa cherry-picked on top to make the benchmark app work

I took a stab at reverting some of the extra assertions & iterator changes introduced in 64eb186 (see davidtaylorhq#1). It does improve things by a couple of percentage points, but it's nothing compared to the 30-40% regression shown above 😒

It looks like a further rendering-speed regression has been released as part of Ember 5.10 😭

[screenshot: emberperf results]

(https://emberperf.discourse.org)

Edit: although it looks like glimmer-vm was not bumped between Ember 5.9 and Ember 5.10, so I guess this must be caused by a change in Ember itself

For reference, the most recent upgrade PR (though this shipped in Ember 5.9):

We went from 0.87.1 to 0.92.0

Likely suspects:

Also, a few deprecations were added (to the AST utils). I wonder how much that code being present (extra branches, etc.) contributes to the slowdown, especially since Ember has a bunch of extra transformations it uses.

@davidtaylorhq are those benches all with classic builds? I'm curious how using Vite could affect the resulting score. As I look through the changelog in Ember, the only things that aren't adding deprecations or deleting old code are changes supporting Vite's strictness.

@NullVoxPopuli I opened an ember.js issue at emberjs/ember.js#20719 with more details on the most recent regression. It looks like the culprit is emberjs/ember.js@53b2de8.

emberperf uses classic builds, and is pretty dependent on AMD resolution. So I think it'll need some pretty significant refactoring to work under Vite/Embroider.

> Also, a few deprecations were added (to the AST utils). I wonder how much that code being present (extra branches, etc.) contributes to the slowdown, especially since Ember has a bunch of extra transformations it uses.

It doesn't, unless Discourse / the test suite used here loads the template compiler at runtime, which is atypical.

Yeah, both Discourse and the emberperf test suite compile templates in advance 👍

ef4 commented

emberperf does load the template compiler into the browser, although at first glance it does it on a completely separate pageload from the one that measures rendering.

I bring it up because there's definitely atypical stuff in there, but so far nothing I can see that would skew the results.

This issue might be related: such a massive bump in bundle size could lead to a big slowdown in performance.

Mixed news so far

  • 🎉 I've converted the codebase to be fully "normal": it uses only public APIs, and templates are compiled at build time: https://github.com/NullVoxPopuli/ember-performance/
  • 🎉 I've been able to reproduce the regression (in development mode)
  • 🤔 The numbers / Hz are way off from the original benchmark, so I'm not entirely sure I'm measuring the beginning and end of a test run correctly. I'm marking the end of a test run via schedule('afterRender', ...), which might be sufficient? I did try requestAnimationFrame and requestIdleCallback, but both of those timings are way slower than our event loop can run (requestIdleCallback came in at about half my monitor's refresh rate, and requestAnimationFrame, as you might expect, capped the Hz at my monitor's refresh rate).
    I did do bad science here, as I changed how the measuring is done for the rendering benches. I'm using tinybench for all benches now, which is great, but things seem to be running faster than expected. If anyone wants to take a poke at the benchmarking code for rendering, that's here, and feedback would be most welcome; see the sketch after this list for the measurement pattern.
  • 🎉 no more custom benchmarking / measurement code
  • 🤔 the main thread is kept so busy that the browser doesn't actually have time to render anything. I think this is probably fine, as we're showing / hiding components faster than the refresh rate.
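
For reference, here's roughly what the tinybench + schedule('afterRender') measurement looks like. This is a sketch, not the repo's actual code; `toggleComponent` is a hypothetical stand-in for however the app shows/hides the component under test:

```ts
import { Bench } from 'tinybench';
import { schedule } from '@ember/runloop';

// Hypothetical stand-in for showing/hiding the component under test.
declare function toggleComponent(): void;

// Resolve once the runloop's 'afterRender' queue has flushed.
function afterRender(): Promise<void> {
  return new Promise((resolve) => schedule('afterRender', null, () => resolve()));
}

const bench = new Bench({ time: 3000 });

bench.add('Render complex html (@glimmer/component)', async () => {
  toggleComponent();
  await afterRender();
});

await bench.run();
console.table(bench.table());
```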

I've published the two apps here:


Second update, after fixing some things:

  • using renderSettled to determine when rendering is done, rather than the runloop's schedule('afterRender') (see the sketch below)
  • I made a big mistake in how I was resetting values, so I've deleted the screenshots from the previous post to not confuse folks.
  • the Hz / operations-per-second values make much more sense now.
  • the chart is now taller, so it's easier to see variation
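
A sketch of the renderSettled variant, reusing the hypothetical `toggleComponent` from the earlier sketch (in recent Ember versions renderSettled is importable from '@ember/renderer'; older releases only exposed it from a private module):

```ts
import { Bench } from 'tinybench';
// Public export in recent Ember versions; historically this lived in the
// private '@ember/-internals/glimmer' module.
import { renderSettled } from '@ember/renderer';

// Hypothetical stand-in for showing/hiding the component under test.
declare function toggleComponent(): void;

const bench = new Bench({ time: 3000 });

bench.add('Render complex html (@glimmer/component)', async () => {
  toggleComponent();
  // Resolves once all scheduled render work has flushed: a closer match for
  // "rendering is done" than the runloop's 'afterRender' queue.
  await renderSettled();
});

await bench.run();
console.table(bench.table());
```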

We have a similar degradation in both development and production modes:

Development

[screenshot]

Production

[screenshot]

Just added canary / 5.12
(these are production results)
[screenshot]

Pixel 6a

Chrome
[screenshot]

Firefox
[screenshot]

Did a memory allocation timeline, and the graph looked like this:
[screenshot: memory allocation timeline]

which aligns with the work from @bendemboski in #1440, released in Ember 5.7: https://github.com/emberjs/ember.js/releases/tag/v5.7.0

Nice work, @bendemboski !

So far:

With no other changes, if I just use glimmer's prod assets as the dev assets, we get a nice speed boost in both production and development environments in the Ember apps:
[screenshot]

I've added 5.11 and 6.0-alpha.1, and applied a 6x CPU slowdown to try to account for random machine variance:
https://ember-performance-testing-prod.pages.dev/report?benchmarks=%5B%22Render%20complex%20html%20(%40glimmer%2Fcomponent)%22%5D&clear=0&emberVersions=%5B%223.28%22%2C%224.0%22%2C%225.4%22%2C%225.5%22%2C%225.6%22%2C%225.7%22%2C%225.8%22%2C%225.9%22%2C%225.10%22%2C%225.11%22%2C%22ember-canary%22%2C%22ember-canary-custom%22%5D&timePerTest=3000

Run 1, Random

[screenshot]

Run 2, reverse-serial, no CPU throttle

[screenshot]

Run 3, Random, no CPU throttle

[screenshot]

As you can see, there is still some variance, even though there isn't really a lot that changed:

emberjs/ember.js@3dfb8a4...85a4f29

(but some logic around EXTEND_PROTOTYPES.Array did change).

3dfb8a4 is the actual v6 alpha.1 SHA

Broccoli
[screenshot]

Embroider (with a 20x CPU slowdown, I think, because I have a lot of machine "noise")
[screenshot]

From this PR: #1606