Speed up installing CUDA on CI
Closed this issue · 6 comments
CUDA takes a long time to install on CI. We're using an existing GitHub action. See if we can speed it up or do something custom to make CI faster.
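For context, the install step with such an action is just a few lines of workflow config. This is an illustrative sketch; the action name, version, and CUDA version are assumptions, not necessarily what we use:

# Hypothetical workflow step; action and versions are assumptions.
- name: Install CUDA toolkit
  uses: Jimver/cuda-toolkit@v0.2.11
  with:
    cuda: '11.4.0'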
Hmm, it seems this is now cached and is fast?
On this topic, I just finished building a Docker image with CUDA and LLVM 7 baked into it. I have one made for Ubuntu 20.04, and I've tested that it's able to build the project just fine. I'm going to see about publishing the image and using it for CI on my fork; hopefully that speeds things up some.
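A minimal sketch of what such an image looks like; the CUDA base tag and exact package list here are assumptions, not the published Dockerfile:

# Sketch only; base tag and packages are assumptions.
FROM nvidia/cuda:11.4.3-devel-ubuntu20.04

# LLVM 7 from the Ubuntu 20.04 repos (fall back to apt.llvm.org if missing).
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
        llvm-7 llvm-7-dev build-essential curl ca-certificates && \
    rm -rf /var/lib/apt/lists/*

# Hypothetical: point builds that look for llvm-config at version 7.
ENV LLVM_CONFIG=/usr/bin/llvm-config-7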
I'm going to do the same thing for a Windows image; I think that'll let me figure out how to get some of the environment variables in the Windows runners working correctly too. The Windows CI seems to be failing with some link errors that I don't run into locally.
I'm going to include the Dockerfiles in the repo so that people can rebuild the image locally or wherever they want.
I did notice that when the CUDA install is cached, it goes much faster. I still think we can speed it up further by baking CUDA directly into the image.
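For reference, the caching boils down to something like this in the workflow; the path and key are assumptions rather than our actual config:

# Hypothetical cache step; path and key are assumptions.
- uses: actions/cache@v4
  with:
    path: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA
    key: cuda-${{ runner.os }}-11.4.0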
Awesome. FWIW I just pushed a change to main that lets us see the files in the installation location / CUDA_PATH dir. On the Windows CI it looks fine; not sure why it isn't working (but I don't have Windows and am still tracing through the code to understand where all the building and linking happens).
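That change is essentially a debug step along these lines (the step name is illustrative):

# Illustrative debug step to inspect the install layout.
- name: List CUDA install contents
  shell: bash
  run: ls -R "$CUDA_PATH"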
Using conda to install CUDA on Windows is much faster than downloading and running the official installer, and it lets you choose which packages to install. In my case, I use pixi to install the minimal CUDA packages needed for compiling PTX files, and a conda environment is easy to cache in CI. You can also use another conda package manager like Miniconda or Mamba. The drawback is that you have to configure the CUDA environment variables yourself (sketch below).
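For the env variables, something along these lines is enough; a sketch only, and the Windows layout under Library is inferred from where conda places nvcc:

# Sketch: point CUDA_PATH at the conda env; exact layout may differ.
# Linux:
export CUDA_PATH="$CONDA_PREFIX"
# Windows (PowerShell): conda packages land under Library here.
$env:CUDA_PATH = "$env:CONDA_PREFIX\Library"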
This is my pixi config file.
[workspace]
channels = ["conda-forge", "nvidia"]
platforms = ["win-64", "linux-64"]
[tasks]
nvcc = "$CONDA_PREFIX/Library/bin/nvcc"
[dependencies]
cuda-cudart = "12.8.*"
cuda-nvcc = "12.8.*"

I think this is fast enough for now.