Profile-Guided Optimization (PGO) benchmark report
zamazan4ik opened this issue · 2 comments
Hi!
I decided to test the Profile-Guided Optimization (PGO) technique to optimize the library's performance. For reference, results for other projects are available at https://github.com/zamazan4ik/awesome-pgo . Since PGO has helped many other libraries, I applied it to cel-rust to see whether a performance win (or loss) can be achieved. Here are my benchmark results.
This information can be interesting for anyone who wants to achieve more performance with the library in their use cases.
Test environment
- Fedora 40
- Linux kernel 6.10.11
- AMD Ryzen 9 5900x
- 48 GiB RAM
- SSD Samsung 980 Pro 2 TiB
- Compiler - Rustc 1.81.0
- cel-rust version: master branch, commit a5c6c2dbb658b13acf69f7b96c313288ae81d29b
- Disabled Turbo Boost
Benchmark
For PGO optimization I use the cargo-pgo tool. I got the Release bench results with the `taskset -c 0 cargo bench` command. The PGO training phase is done with `taskset -c 0 cargo pgo bench`, and the PGO optimization phase with `taskset -c 0 cargo pgo optimize bench`. `taskset -c 0` is used to reduce the OS scheduler's influence on the results. All measurements are done on the same machine, with the same background "noise" (as much as I can guarantee).
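For clarity, the full measurement sequence looks like this (assuming cargo-pgo is already installed and you are in a checkout of the benchmarked cel-rust commit):

```sh
# Baseline: ordinary release-mode benchmarks, pinned to CPU core 0
taskset -c 0 cargo bench

# PGO training phase: build with instrumentation, run the benchmarks,
# and collect runtime profiles
taskset -c 0 cargo pgo bench

# PGO optimization phase: rebuild with the collected profiles applied
# and re-run the benchmarks
taskset -c 0 cargo pgo optimize bench
```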
Results
I got the following results:
- Release: https://gist.github.com/zamazan4ik/da5d6b38819a2d96aabbd7a350d998a0
- PGO optimized compared to Release: https://gist.github.com/zamazan4ik/b900c1c640cf4d86f4dab96750bd2c24
- (just for reference) PGO instrumented compared to Release: https://gist.github.com/zamazan4ik/c7395c011e81b7efd4bd7fb41e0f5348
According to the results, PGO measurably improves the library's performance.
Further steps
At the very least, the library's users can find this performance report and decide to enable PGO for their applications if they care about the library's performance in their workloads. Maybe a small note somewhere in the documentation (the README file?) will be enough to raise awareness about this possible performance improvement.
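As an illustration, a user who wants to try PGO on an application that depends on cel-rust could do something like the following (a minimal sketch using cargo-pgo; the binary name and workload are placeholders, and the target triple in the path depends on your platform):

```sh
# One-time setup: install cargo-pgo and the llvm-tools component it relies on
cargo install cargo-pgo
rustup component add llvm-tools-preview

# Build the application with PGO instrumentation
cargo pgo build

# Run a representative workload to collect profiles
# (cargo-pgo builds with an explicit target triple, so the binary lands
# under target/<target-triple>/release; "your-app" is a placeholder)
./target/x86_64-unknown-linux-gnu/release/your-app --your-typical-workload

# Rebuild with the collected profiles applied
cargo pgo optimize build
```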
Please don't treat this issue as an actual bug report - it's just a benchmark report (filed here since Discussions are disabled for the repo).
Thank you.
Perhaps I'm misreading the benchmarks but I see "Performance has regressed" in almost all cases when looking at your comparison between PGO and default. How should I interpret these results?
Yeah, I need to explain a bit. You need to read the "PGO optimized compared to Release" results - these are the results after applying PGO optimization, compared to the Release build. The "PGO instrumented compared to Release" results are shown just for reference - these are the results from the PGO training phase.
PGO is a two-step process:
- Collect runtime metrics with PGO instrumentation
- Use the collected metrics during PGO optimization
Since collecting metrics at runtime has some overhead, performance regresses during the instrumentation phase. However, I show this information to give an estimate of how much performance can regress during the training phase (this can be important for anyone who wants to perform PGO instrumentation directly in a production environment).
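For reference, here is roughly what those two steps look like without cargo-pgo, using rustc's profile flags directly (a minimal sketch; the profile directory /tmp/pgo-data and the workload invocation are placeholders):

```sh
# Step 1: build with instrumentation and collect runtime metrics
RUSTFLAGS="-Cprofile-generate=/tmp/pgo-data" cargo build --release
./target/release/your-benchmark-workload   # run a representative workload

# Merge the raw .profraw files (llvm-profdata ships with llvm-tools-preview)
llvm-profdata merge -o /tmp/pgo-data/merged.profdata /tmp/pgo-data

# Step 2: rebuild using the collected metrics
RUSTFLAGS="-Cprofile-use=/tmp/pgo-data/merged.profdata" cargo build --release
```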