quantumlib/qsim

Question about multi-GPU support

jakurzak opened this issue · 3 comments

Is there multi-GPU support in qsim, or are there any plans to implement it?
Is there any intention to implement distributed memory capabilities?
What can I do when the number of qubits is large enough that the state vector exceeds the memory of a single GPU?
Any feedback appreciated!

There is no native multi-GPU support in qsim and there are no plans to implement it. Note, however, that there is multi-GPU support in qsimcirq via the NVIDIA cuQuantum Appliance.
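For reference, a minimal sketch of what that usage might look like. This assumes the qsimcirq build bundled with the cuQuantum Appliance container (not the open-source pip package), where, per NVIDIA's documentation, `gpu_mode` can be given the number of GPUs (or a list of device ids) to select the multi-GPU cuStateVec backend; the exact option names may differ in your version, so check the Appliance docs.

```python
# Sketch only: assumes the cuQuantum Appliance build of qsimcirq, where
# gpu_mode selects the multi-GPU cuStateVec backend. Option names may
# differ in other qsimcirq builds.
import cirq
import qsimcirq

qubits = cirq.LineQubit.range(32)
circuit = cirq.Circuit(
    cirq.H(qubits[0]),
    *[cirq.CNOT(qubits[i], qubits[i + 1]) for i in range(31)],
    cirq.measure(*qubits, key="m"),
)

# Ask for the cuStateVec-based backend spread across 2 GPUs
# (Appliance-specific interpretation of gpu_mode).
options = qsimcirq.QSimOptions(gpu_mode=2)
simulator = qsimcirq.QSimSimulator(qsim_options=options)
results = simulator.run(circuit, repetitions=100)
print(results.histogram(key="m"))
```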

Sorry for a naive question, but what about the technique described here: https://quantumai.google/qsim/tutorials/qsimcirq?authuser=1#advanced_applications_distributed_execution


The "circuit cutting" technique applied by qsimh is equivalent to running multiple completely separate simulations and combining the final results. No communication takes place during circuit execution. Note also that qsimh is fairly out of date - it does not have a Cirq interface or support running on GPU (in contrast with qsim.)

Depending on the exact circuit you want to simulate, you might still be able to use this technique to distribute the work and avoid exceeding GPU memory limits. However, you would need to manually construct the cut circuits and feed them into qsim (not qsimh) for simulation.
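To make the idea concrete, here is a minimal sketch of the simplest possible case: a circuit that already factors into two groups of qubits with no gates crossing the cut, so each half can be simulated independently (e.g. on separate GPUs or machines) and the full state recovered as a tensor product. A real cut through an entangling gate additionally requires decomposing that gate and summing the recombined results over the decomposition terms, which is the bookkeeping qsimh automates; that part is omitted here.

```python
# Sketch: the trivially separable case of "manual circuit cutting".
# Each half can be simulated by a separate qsim process/GPU; the full
# 4-qubit state is then the Kronecker product of the two half-states.
import cirq
import numpy as np
import qsimcirq

group_a = cirq.LineQubit.range(0, 2)   # qubits on one side of the cut
group_b = cirq.LineQubit.range(2, 4)   # qubits on the other side

circuit_a = cirq.Circuit(cirq.H(group_a[0]), cirq.CNOT(*group_a))
circuit_b = cirq.Circuit(cirq.X(group_b[0]), cirq.CNOT(*group_b))

sim = qsimcirq.QSimSimulator()
state_a = sim.simulate(circuit_a).final_state_vector
state_b = sim.simulate(circuit_b).final_state_vector

# Cirq treats qubit 0 as the most significant bit, so kron(a, b)
# reproduces the state the combined 4-qubit circuit would produce.
full_state = np.kron(state_a, state_b)
print(full_state.shape)  # (16,)
```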