manylinux Docker images featuring an installation of the NVIDIA CUDA compiler, runtime and development libraries, and the NVIDIA graphics driver, designed specifically for building Python wheels with a C++/CUDA backend.
Obtain the Docker images from Docker Hub for the following CUDA versions and architectures:
manylinux_2_28 on X86_64 arch with CUDA 12.3 (see on Dockerhub)
docker pull sameli/manylinux_2_28_x86_64_cuda_12.3
manylinux2014 on X86_64 arch with CUDA 12.3 (see on Dockerhub)
docker pull sameli/manylinux2014_x86_64_cuda_12.3
manylinux2014 on X86_64 arch with CUDA 12.0 (see on Dockerhub)
docker pull sameli/manylinux2014_x86_64_cuda_12.0
manylinux2014 on X86_64 arch with CUDA 11.8 (see on Dockerhub)
docker pull sameli/manylinux2014_x86_64_cuda_11.8
manylinux2014 on X86_64 arch with CUDA 10.2 (see on Dockerhub)
docker pull sameli/manylinux2014_x86_64_cuda_10.2
manylinux_2_28 on AARCH64 arch with CUDA 12.3 (see on Dockerhub)
docker pull sameli/manylinux_2_28_aarch64_cuda_12.3
manylinux2014 on AARCH64 arch with CUDA 12.3 (see on Dockerhub)
docker pull sameli/manylinux2014_aarch64_cuda_12.3
The Docker images are built from the following base images:
- manylinux_2_28 on X86_64 architecture is based on: quay.io/pypa/manylinux_2_28_x86_64
- manylinux_2_28 on AARCH64 architecture is based on: quay.io/pypa/manylinux_2_28_aarch64
- manylinux2014 on X86_64 architecture is based on: quay.io/pypa/manylinux2014_x86_64
- manylinux2014 on AARCH64 architecture is based on: quay.io/pypa/manylinux2014_aarch64
To maintain a minimal Docker image size, only the essential compilers and libraries from CUDA Toolkit are included. These include:
- CUDA compiler: cuda-crt, cuda-cuobjdump, cuda-cuxxfilt, cuda-nvcc, cuda-nvprune, cuda-nvvm, cuda-cudart, cuda-nvrtc, cuda-opencl
- CUDA libraries: libcublas, libcufft, libcufile, libcurand, libcusolver, libcusparse, libnpp, libnvjitlink, libnvjpeg
- CUDA development libraries: cuda-cccl, cuda-cudart-devel, cuda-driver-devel, cuda-nvrtc-devel, cuda-opencl-devel, cuda-profiler-api, libcublas-devel, libcufft-devel, libcufile-devel, libcurand-devel, libcusolver-devel, libcusparse-devel, libnpp-devel, libnvjitlink-devel, libnvjpeg-devel
- NVIDIA driver: nvidia-driver:latest-dkms (see note below)
If you need additional packages from the CUDA Toolkit to be included in the images, feel free to open a GitHub issue.
The following environment variables are defined:
PATH=/usr/local/cuda/bin:${PATH}
LD_LIBRARY_PATH=/usr/local/cuda/lib64:${LD_LIBRARY_PATH}
CUDA_HOME=/usr/local/cuda
CUDA_ROOT=/usr/local/cuda
CUDA_PATH=/usr/local/cuda
CUDADIR=/usr/local/cuda
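These defaults let build tooling find the toolkit without extra configuration. As a rough illustration, the sketch below composes the same variables by hand; it is runnable outside a container (where the paths need not actually exist), and inside a container the images already set all of this:

```shell
# Sketch: reproduce the environment the images define.
# Inside a container these variables are already set; outside,
# /usr/local/cuda may not exist, so this only shows the layout.
CUDA_HOME=/usr/local/cuda
export PATH="${CUDA_HOME}/bin:${PATH}"
export LD_LIBRARY_PATH="${CUDA_HOME}/lib64:${LD_LIBRARY_PATH:-}"
echo "nvcc expected at: ${CUDA_HOME}/bin/nvcc"
```

Because `/usr/local/cuda/bin` is prepended to `PATH`, a bare `nvcc` invocation resolves to the toolkit compiler.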
Run a container in interactive mode with:
docker run -it sameli/manylinux_2_28_x86_64_cuda_12.3
The nvcc executable is available on the PATH. To check the CUDA version, execute:
docker run -t sameli/manylinux2014_x86_64_cuda_12.0 nvcc --version
The output of the above command is:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Mon_Oct_24_19:12:58_PDT_2022
Cuda compilation tools, release 12.0, V12.0.76
Build cuda_12.0.r12.0/compiler.31968024_0
When running the Docker containers in a GitHub Actions workflow, you may encounter the error: no space left on device.
To resolve this, clear the GitHub runner's cache before executing the Docker container:
- name: Clear Cache
  run: rm -rf /opt/hostedtoolcache
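In a complete workflow, the cache-clearing step would run on the host before the container does any work. The sketch below shows one plausible arrangement; the image tag, the bind-mount layout, and the /opt/python/cp311-cp311 interpreter path (the standard manylinux interpreter layout) are assumptions for illustration, not something this repository prescribes:

```
jobs:
  build-wheel:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Free disk space on the runner before pulling the large CUDA image.
      - name: Clear Cache
        run: rm -rf /opt/hostedtoolcache
      # Build the wheel inside the CUDA manylinux container, with the
      # checked-out source bind-mounted at /project.
      - name: Build wheel in CUDA manylinux container
        run: |
          docker run --rm -v "$PWD":/project -w /project \
            sameli/manylinux_2_28_x86_64_cuda_12.3 \
            /opt/python/cp311-cp311/bin/pip wheel . -w dist/
```

A subsequent step could run auditwheel repair on the produced wheel to tag it for the corresponding manylinux platform.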
To request a Docker image for a specific CUDA version or architecture, feel free to open a GitHub issue.