quantumlib/qsim

Errors after make: not enough memory

ochapman-dphil opened this issue · 2 comments

After running make and pip install ., all seemed well.

However, upon testing, the error message "not enough memory: is the number of qubits too large?" always appears. This occurs when the qsimcirq_test.py::test_cirq_qsim_gpu_amplitudes test is run, and more generally whenever qsimcirq.QSimOptions(use_gpu=True) is set, regardless of whether gpu_mode is 0 or 1.

My system should have ample memory available. Are there any likely fixes?

In this context, the error message means that a cudaMalloc call is failing. There should certainly be enough memory on your GPU for just two qubits in qsimcirq_test.py::test_cirq_qsim_gpu_amplitudes. Could you add ErrorCheck(rc); after line 85 in lib/vectorspace_cuda.h (and perhaps add #include "util_cuda.h" at the beginning of that file, after line 22) to see the actual CUDA error message?
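For illustration, here is a self-contained sketch of what the suggested ErrorCheck(rc) achieves; it is not the actual qsim code, and the ErrorAssert helper and message format are assumptions modeled on the error reported below.

```cpp
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Hypothetical helper: print the real CUDA error string plus file/line,
// instead of reducing every failure to "not enough memory".
inline void ErrorAssert(cudaError_t code, const char* file, unsigned line) {
  if (code != cudaSuccess) {
    std::fprintf(stderr, "CUDA error: %s %s %u\n",
                 cudaGetErrorString(code), file, line);
    std::exit(code);
  }
}

#define ErrorCheck(x) ErrorAssert((x), __FILE__, __LINE__)

int main() {
  float* p = nullptr;
  // A tiny allocation, comparable to a 2-qubit state vector. On the
  // reporter's machine even this fails, because the driver/toolkit
  // problem diagnosed below surfaces at the allocation call.
  cudaError_t rc = cudaMalloc(&p, 8 * sizeof(float));
  ErrorCheck(rc);
  std::printf("cudaMalloc succeeded\n");
  cudaFree(p);
  return 0;
}
```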

The error message now gives:

CUDA error: the provided PTX was compiled with an unsupported toolchain. /tmp/pip-req-build-8_972xno/pybind_interface/custatevec/../../lib/vectorspace_cuda.h 89

I assume this means that the wrong version of CUDA was used to compile the code. I've found that nvidia-smi reports CUDA Version: 11.2, whereas nvcc --version reports Cuda compilation tools, release 11.4, V11.4.100, Build cuda_11.4.r11.4/compiler.30188945_0. If so, the driver only supports CUDA up to 11.2, while the extension was built with the 11.4 toolkit, so the PTX emitted by nvcc 11.4 cannot be JIT-compiled by the older driver.
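To help confirm the diagnosis, a short standalone check using the standard CUDA runtime API (cudaDriverGetVersion / cudaRuntimeGetVersion, not part of qsim) compares what the installed driver supports against what the binary was built with:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
  int driver = 0, runtime = 0;
  cudaDriverGetVersion(&driver);    // highest CUDA version the driver supports (what nvidia-smi shows)
  cudaRuntimeGetVersion(&runtime);  // CUDA runtime this binary was built against (what nvcc produced)
  // Versions are encoded as 1000 * major + 10 * minor, e.g. 11020 for 11.2.
  std::printf("driver supports CUDA %d.%d, runtime built with CUDA %d.%d\n",
              driver / 1000, (driver % 100) / 10,
              runtime / 1000, (runtime % 100) / 10);
  if (runtime > driver) {
    std::printf("mismatch: PTX from the newer toolkit cannot be JIT-compiled "
                "by the older driver\n");
  }
  return 0;
}
```

In general this kind of mismatch is resolved either by upgrading the NVIDIA driver to one that supports the newer CUDA version, or by rebuilding qsim with a toolkit no newer than what the driver supports.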