robotology/idyntree

Wheels are not ABI compatible with C++11

diegoferigo opened this issue · 5 comments

I move the discussion about the ABI compatibility of the new wheels (#774) from robotology/gym-ignition#276.

Long story short: the manylinux2014 standard requires gcc 4, which does not support the new C++11 ABI for std::string. This means that the libraries shipped in the wheel and exposed through the exported CMake project cannot be consumed by downstream projects that use the new ABI (nowadays, essentially all code).

The subtle part is that downstream compilation works just fine, but at runtime the dynamic loader fails to resolve the symbols, because the library only exports the old std::string signatures.

There are quite a lot of issues out there; an interesting one is pytorch/pytorch#17492. Other interesting details are summarized in How we build Apache Arrow's manylinux wheels. A third resource, from TensorFlow: https://groups.google.com/a/tensorflow.org/g/build/c/Xo190xAPyW8?pli=1.

I'm not yet sure what the best solution is here. If supporting manylinux2014 is too problematic, maybe uploading wheels that are not compliant with it would work just fine for the majority of users. cc @traversaro

I am not sure that passing _GLIBCXX_USE_CXX11_ABI=1 will work. The main problem is that even if you explicitly ask for iDynTree to be compiled with _GLIBCXX_USE_CXX11_ABI=1, its dependencies will still be compiled with _GLIBCXX_USE_CXX11_ABI=0, so if any std::string-based API is used by iDynTree, it will not link correctly against its dependencies. In this sense, C dependencies (libxml2), or C++ libraries that use char * in their API (assimp), are less problematic than C++ libraries that use std::string in their API (ipopt).

To be honest, I think pip/PyPI is a great deployment tool for pure Python users, while using it to further distribute C++ libraries is a bit fragile, exactly for the reasons that emerge in those issues. So I think that whatever solution works for gym_ignition is fine.

Yes, what I wrote yesterday in the downstream issue was the outcome of a first quick check of resources on the internet. I confirm that passing that option to the compiler is not an option. The compilation in CentOS uses a modern gcc, but it is then auditwheel that repairs the wheel to make it compatible with manylinux2014, and it is this process that changes the ABI (I don't know many details about the process).

Changing that option would result in a wheel not compatible with manylinux*. It is as simple as that.

The best solution IMO is what we also use elsewhere: compile a linux_x86_64 wheel (building in CentOS is fine) and then call auditwheel to repair it without changing the target platform. All auditwheel does in that case is vendor all the shared libraries into the wheel, without altering the ABI compatibility. Afterwards, as we do elsewhere, in order to upload this linux_x86_64 wheel to PyPI, we have to rename the package so that it pretends to be manylinux*. PyPI does not perform any other check.

As a final remark, this is only necessary because we vendor the entire CMake project and allow downstream projects to import it. If we were only providing the Python bindings, the standard manylinux process would work fine.

The final outcome to address the problem is described in #777 (comment).

TL;DR
Using CentOS is not an option; we build linux_x86_64 wheels on Ubuntu without running auditwheel, therefore no dependencies are included.

Now that the wheels are compatible with the C++11 ABI, can we close the issue, @diegoferigo?

Sure. Closing.