rust-lang/rust

codegen-units + ThinLTO is not as good as codegen-units = 1


We have recently had a fair number of reports about a drop in code generation quality. One of the recent causes of this drop is the enabling of multiple codegen units together with ThinLTO.

It seems that ThinLTO is not capable of producing results matching those obtained by compiling with a single codegen unit in the first place.

The list of known reports follows:

Improvements to ThinLTO quality are expected with the upcoming LLVM upgrade(s); however, they do not help sufficiently, so it would be nice to figure out why ThinLTO is not doing a good enough job.

cc @alexcrichton @nikomatsakis

Thanks for filing this @nagisa =)

Indeed thanks! I'll try to take a closer look at this when we've upgraded LLVM

By the way, there is a great talk about how ThinLTO is designed here: https://www.youtube.com/watch?v=p9nH2vZ2mNo in case people are curious. :)

Matrix multiplication is slower with ThinLTO + multiple codegen-units using https://github.com/bluss/matrixmultiply.

I can create a minimal example if needed.
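
A minimal way to reproduce that comparison is to benchmark the crate twice while toggling the optimized profile between the two configurations under discussion. The sketch below uses Cargo's `[profile.bench]` keys with illustrative values; the exact defaults depend on the toolchain version.

```toml
# Cargo.toml sketch for an A/B comparison (illustrative values).

# Configuration A: the current default — multiple codegen units with
# ThinLTO run across them. Omitting the keys entirely gives you this;
# shown here only for contrast:
#   codegen-units = 16   (assumed default for optimized builds)
#   lto = false          (thin, local-crate LTO across the units)

# Configuration B: the single-codegen-unit build the reports compare against.
[profile.bench]
codegen-units = 1
```

Running `cargo bench` once per configuration should show whether the regression reproduces on a given machine.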

I've always thought that there should be another Cargo profile, something like:

```toml
# The publish profile, used for `cargo build --publish`.
[profile.publish]
# (...) everything else the same as profile.release except:
lto = true        # Enable full link-time optimization.
codegen-units = 1 # Use only 1 codegen-unit to enable full optimizations.
```

Because I feel there should be a distinction between release builds that a developer compiles on their local machine during development (not debug builds, but "fast" release builds) and truly publishable builds (for example, the version of Firefox that is released for public consumption), where sacrificing build time once is more acceptable.

I realize the status quo for C/C++ is also not to enable LTO by default, but it just seems strange to me to have to opt into these kinds of performance enhancements when the cost (for published binaries) is a one-time compile-time cost.

I think "publish" is uncomfortably close to "release". But I could get behind a "debug/optimize/release" terminology proposal.

Historically I've used debug, internal, release, retail.

Plus a few variations with "add-ons" such as "Retail-Logging" or "Retail-Instrumented".

For Rust instead of 'Retail' I'd propose MaxSpeed. Whatever it's called, a profile with lto=true and codegen-units=1 is definitely a good idea!

brson commented

@johnthagen I agree that today's 'release' profile seems to have two use cases that want different configurations. Is it possible to create custom cargo profiles? Is there an upstream cargo issue for this?

@brson It looks like it's not yet implemented, but it has been discussed for several years.

Perhaps @matklad has some more up-to-date information on this?

My understanding is that "custom profiles" are pretty far away at this moment (we need to do profile overrides first). However, we do have config profiles as a nightly feature, which allows overriding a profile via .cargo/config. This might be used, for example, to specify codegen-units=1 on the build server that produces release artifacts.
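
For concreteness, the build-server override might look roughly like the sketch below; this assumes the config-based profile support mentioned above (nightly-only at the time), and the exact file name and feature gate vary by Cargo version.

```toml
# .cargo/config on the machine that produces release artifacts (sketch).
# Newer Cargo spells the file .cargo/config.toml; older Cargo needed a
# nightly feature gate before it would read profiles from here.
[profile.release]
codegen-units = 1
lto = true
```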

brson commented

Thanks @johnthagen @matklad for the leads!

Visiting for T-compiler backlog bonanza, since it was tagged as C-tracking-issue (perhaps erroneously)

  • I think at this point the disparity between codegen-units + ThinLTO vs codegen-units=1 is, to some degree, something that we are accepting as a "fact of life"
  • We do have a problem in that people are surprised that --release does not produce the most optimized code possible when they benchmark. But, again, assuming that the aforementioned disparity is a "fact of life", we would have to address the "--release surprise" by other means; we cannot make --release imply codegen-units=1 without severely regressing compilation performance for many users.
  • It would probably be good if we had benchmark data tracking the disparity between codegen-units+ThinLTO vs codegen-units=1, just so we have some idea of how big the problem is, and whether it is getting better or worse
  • It would also be good to have official documentation on how to tune your settings for "best object performance" vs "usable compilation times" (a rough sketch follows below)
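
As a rough starting point for that documentation, the two ends of the spectrum could be sketched like this (illustrative values, not official recommendations):

```toml
# "Usable compilation times": the stock release profile — many codegen units
# compiled in parallel, with ThinLTO stitching them back together. This is
# what you get with no overrides; shown as comments for contrast:
#   codegen-units = 16   (assumed default for optimized builds)
#   lto = false          (thin, local-crate LTO only)

# "Best object performance": trade longer builds for better codegen.
[profile.release]
codegen-units = 1   # a single unit removes cross-unit optimization barriers
lto = true          # full cross-crate LTO
```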

@rustbot label: -C-tracking-issue

Also, given that the original point of the issue was to determine why ThinLTO didn't seem to do a good enough job, that seems like a question that is well-suited for wg-llvm.

@rustbot label: A-llvm