How did we enable C++17?
hmaarrfk opened this issue · 7 comments
Comment:
I'm trying to rebuild TensorFlow 2.9.1 with the new abseil pinnings. (Wow, look how that conversation is circling back to bite me!)
Did we do anything to force C++17 in the TensorFlow 2.10 build? Nothing really jumps out immediately.
Is there a flag I could set to enable C++17 on TF 2.9 (and get past the linking error with abseil)? I understand I'll likely have a lot to patch.
I believe that the latest 2.9 commit we have is
Thanks for your help!
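For reference, the usual way to force a particular C++ standard in a bazel-driven TensorFlow build is through bazel's `--cxxopt`/`--host_cxxopt` options; whether the feedstock's 2.10 build scripts pass exactly these flags is an assumption here, and the TF 2.9 sources will likely still need patches on top of them. A minimal sketch:

```
# Minimal sketch: force C++17 for both target and host compilations.
# These could also live in a .bazelrc as "build --cxxopt=-std=c++17" lines.
bazel build \
  --cxxopt=-std=c++17 \
  --host_cxxopt=-std=c++17 \
  //tensorflow/tools/pip_package:build_pip_package
```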
It may be that it just "happened" in 2.10 alongside our migration from abseil C++11 to C++17.
I found this commit which may be helpful.
> It may be that it just "happened" in 2.10 alongside our migration from abseil C++11 to C++17.
It was a necessity for using the new abseil packages, yes (arguably it was already a necessity before, but on Unix the abseil ABI is a bit less divergent between C++11 and C++17, so people basically didn't notice).
So by default I'd strongly recommend backporting whatever's necessary to get the C++17 compilation to run through. However, if it's a huge pain, you can drop back to what was the status quo before and 🤞
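To make the ABI point concrete: when abseil is built in C++17 mode it aliases types such as absl::string_view to their std:: counterparts, so the mangled symbols differ from a C++11 build, and mixing the two shows up as unresolved-symbol link errors. A rough way to check which flavor a built library used (the library path and grep pattern are only examples):

```
# Demangle symbols and look at the string_view signatures.
# C++11-mode abseil shows absl::string_view in them; C++17-mode abseil shows
# std::basic_string_view instead, which is why mixing the two fails to link.
nm -C bazel-bin/tensorflow/libtensorflow_framework.so.2 | grep string_view | head
```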
I seem to be running into a few lingering issues with absl ... it may be that I just give up on this venture and use pip while I need to :/
> Did we do anything to force C++17 in the TensorFlow 2.10 build? Nothing really jumps out immediately.
> Is there a flag I could set to enable C++17 on TF 2.9 (and get past the linking error with abseil)? I understand I'll likely have a lot to patch.
There were a few things I edited in the custom_toolchain; see this commit: 47f9189
On the upstream front, I would look for commits by the person whose PR I had to patch to make 2.9.1 work on OSX, so it may be relevant: https://github.com/ngam/tensorflow-feedstock/blob/0dbafbcd40180f67075341cf22d3efd34f0dace9/recipe/patches/fix-absl-stuff.patch
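If it helps, a hedged sketch of fetching that patch and test-applying it to a local 2.9.1 checkout (the raw URL is just the blob link above rewritten; the source directory name is an assumption):

```
# Download the referenced absl patch and dry-run it against a TF 2.9.1 tree.
curl -L -o fix-absl-stuff.patch \
  https://raw.githubusercontent.com/ngam/tensorflow-feedstock/0dbafbcd40180f67075341cf22d3efd34f0dace9/recipe/patches/fix-absl-stuff.patch
cd tensorflow-2.9.1
git apply --check ../fix-absl-stuff.patch   # dry run; expect some manual fix-ups on 2.9
git apply ../fix-absl-stuff.patch
```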
Hmm. Truthfully, I ran out of time to recompile a past version of TensorFlow.
TF memory usage increased somewhere between 2.9.0 and 2.10.0, which meant we could no longer run our models on certain memory-limited GPUs.
I "fixed" the problem by decreasing the batch size.
Thank you all for your replies!