NVIDIA/cuda-quantum

Add libc++ support to CUDA-Q


Motivation

It is desirable for a variety of reasons to distribute a full C++ toolchain for quantum applications. Currently, CUDA-Q includes the necessary compiler tools as well as quantum-specific runtime libraries, but does not include the C++ standard library it relies on. The CUDA-Q compiler supports only a specific C++ standard library and makes use of fairly new C++ and LLVM features. The fact that CUDA-Q is compiled with a different compiler than the libraries and executables it produces complicates building compiler extensions, runtime plugins, and CUDA-Q libraries that work across platforms. These compatibility challenges extend to Python packages: for two Python modules to be compatible, their bindings must be generated with the same compiler and standard library version.

Building CUDA-Q with its own C++ runtime libraries allows much more flexibility and eliminates these compatibility issues for compiler extensions, runtime plugins, and CUDA-Q libraries with generated Python bindings. Additionally, the available C++ packages vary across operating systems, so a side-by-side installation of the CUDA-Q tools that does not interfere with other toolchains or system configurations is necessary in any case. Including the necessary C++ support with CUDA-Q takes care of that.

Considerations

  • The GNU toolchain is widely used across Linux distributions. However, it relies on GNU-specific extensions for C and C++ and is not supported on every operating system that CUDA-Q may want to support.
  • Clang/LLVM invests heavily in GCC compatibility for all its components. For example, LLVM supports most GNU-specific extensions, has recently added the llvm-libgcc project to facilitate migration to the LLVM toolchain for OS maintainers, and there is work in progress to build the GNU system libraries with Clang.


Proposed Change

  • Build the necessary C++ runtime libraries and include them, along with the compiler, in the CUDA-Q installation; specifically: libc++, libc++abi, libunwind, compiler-rt, and openmp.
  • Distribute both shared and static runtime libraries, where the static ones are hermetic, i.e. fully self-contained
  • Keep relying on the OS/OS package manager to provide the necessary C libraries (runtime libraries and headers)
  • Configure the CUDA-Q toolchain to use the included libraries by default
  • Make sure CUDA-Q binaries can be used from host code compiled against libstdc++ (via a C-like interface; see the sketch after this list). Conversely, some plugins within CUDA-Q (simulators) will be built with CUDA/libstdc++ and are called from CUDA-Q. That works fine as long as static libraries are available or the shared libraries do not have any indirect libstdc++ dependencies.
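
To illustrate the C-like interface point above, here is a minimal sketch of what such a boundary could look like. All names (`cudaq_sampler`, `cudaq_sampler_run`, etc.) are hypothetical and only demonstrate the pattern: an opaque handle plus `extern "C"` functions, so that no C++ standard library types ever cross the library boundary.

```cpp
// ---- cudaq_c_api.h: shipped with the libc++-built CUDA-Q libraries ----
// (hypothetical names; the point is that only C types cross the boundary)
extern "C" {
typedef struct cudaq_sampler cudaq_sampler;           // opaque handle
cudaq_sampler *cudaq_sampler_create(const char *kernel_name);
int cudaq_sampler_run(cudaq_sampler *s, int shots,
                      long *counts, int num_counts);  // caller-provided C buffer
void cudaq_sampler_destroy(cudaq_sampler *s);
}

// ---- host.cpp: compiled with g++ against libstdc++ ----
#include <cstdio>
#include <vector>

int main() {
  cudaq_sampler *s = cudaq_sampler_create("ghz");
  std::vector<long> counts(4);  // libstdc++ type stays on this side of the boundary
  if (cudaq_sampler_run(s, 1000, counts.data(), static_cast<int>(counts.size())) == 0)
    std::printf("first bucket: %ld\n", counts[0]);
  cudaq_sampler_destroy(s);
}
```

Because only pointers to C types and opaque handles appear in the interface, it does not matter that the two sides link against different C++ standard libraries.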

I propose to give this a try with the installer. For now, we would keep building and testing with libstdc++ as well, and Python wheels will continue to be built as is for each (Linux) platform. While libc++ seems to do a good job of hiding symbols with its hermetic build, we need to be careful about multiple openmp libraries, just like other major packages (e.g. pytorch, scikit-learn) do; this concern exists irrespective of this change.

Implications

Right now, it is possible to have CUDA kernel code (CUDA-specific syntax) and CUDA-Q API calls in the same source file, provided the same host compiler is used as was used to compile CUDA-Q. This is impractical even with the current compilation against libstdc++, since it is desirable to compile CUDA-Q with a fairly new compiler to leverage new compiler and language features. Building CUDA-Q against libc++ instead will require splitting CUDA kernels and any CUDA-Q calls (kernels or API calls) into separate source files and connecting them via a C-like API during linking, as sketched below. The same holds for other libraries that are prebuilt against another C++ standard library and that use C++ implementation-specific data structures in the called API.
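
As a minimal sketch of that split (the kernel and function names below are made up for illustration): the CUDA translation unit is built with nvcc against libstdc++, the CUDA-Q translation unit with nvq++ against libc++, and the two meet only at a C-like interface resolved at link time.

```cpp
// ---- gpu_part.cu: compiled with nvcc, host code built against libstdc++ ----
__global__ void scale(double *data, double factor, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) data[i] *= factor;
}

// C-like entry point; only C types appear in the signature.
extern "C" void scale_on_gpu(double *device_data, double factor, int n) {
  scale<<<(n + 255) / 256, 256>>>(device_data, factor, n);
}

// ---- quantum_part.cpp: compiled with nvq++ against libc++ ----
extern "C" void scale_on_gpu(double *device_data, double factor, int n);

void run_hybrid_step(double *device_data, int n) {
  // ... CUDA-Q kernel launches and API calls live in this translation unit ...
  scale_on_gpu(device_data, 0.5, n);  // call into the CUDA translation unit
}
```

Since the interface consists of plain C types, neither object file depends on which C++ standard library the other was built with.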