
ThereminQ - CLassiC

Orchestrating Qrack, ELK, Bonsai and Tipsy in OpenCL, VCL and CUDA

[Image: ThereminQ data demo #1]


ThereminQ orchestrates a suite of best-of-class tools designed to control, extend and visualize data flowing to and from quantum circuits, using Qrack, ELK, Tipsy and Jupyter on CUDA and OpenCL accelerators.

Also have a look at the following images:

Other tags contain:

Images can be run independently, but they are also made to work with the vQbit infrastructure K8s Helm repo

Installation, setup and usage scenarios can be found here


Built on the CUDA-CLuster stack

  • WebVNC, CUDA 11.4+ & OpenCL 1.2+ with NV, AMD & Intel HW support


The initial VNC password is 00000000

  • The noVNC website is available at port 6080
  • xRDP runs on port 3389, connecting to VNC at 127.0.0.1:5900
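As a quick sketch of connecting once the controller container from the next section is running; the noVNC path and the extra RDP port mapping are assumptions, since the example below only publishes port 6080:

# Open the noVNC web client in a browser (port published with -p 6080:6080)
xdg-open http://localhost:6080/vnc.html
# RDP access requires publishing port 3389 as well, e.g. adding -p 3389:3389
xfreerdp /v:localhost:3389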

Docker: deploy an instance of ThereminQ's WebUI controller container

# WebUI controller (publishes noVNC on port 6080)
docker run --gpus all \
--privileged \
-p 6080:6080 \
--device=/dev/dri:/dev/dri \
--mount type=bind,source=/var/log/qrack,target=/var/log/qrack \
-d twobombs/thereminq:controller

# CPU-only "supreme" workload
docker run -ti \
--mount type=bind,source=/var/log/qrack,target=/var/log/qrack \
twobombs/thereminq bash /root/run-supreme-cpu

# GPU "cosmos" workload
docker run --gpus all \
--device=/dev/dri:/dev/dri \
-ti \
--mount type=bind,source=/var/log/qrack,target=/var/log/qrack \
twobombs/thereminq bash /root/run-cosmos-gpu1
  • Use --gpus all for NVIDIA Docker hosts; in addition, --privileged and --device=/dev/dri:/dev/dri will expose all GPUs in the system, e.g. AMD/Intel iGPUs. A variant for hosts without NVIDIA GPUs is sketched below.
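As a hedged sketch for a host with only AMD/Intel GPUs, the same controller can be started without --gpus all, keeping only the flags mentioned above (an assumption, not an officially documented invocation):

# Controller on an AMD/Intel-only host: drop --gpus all, keep /dev/dri exposure
docker run --privileged \
--device=/dev/dri:/dev/dri \
-p 6080:6080 \
--mount type=bind,source=/var/log/qrack,target=/var/log/qrack \
-d twobombs/thereminq:controller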

All specialized workloads are listed here



Experimental BQP=BPP stabilizer_t_nn high-qubit workloads

docker run --gpus all \
--device=/dev/dri:/dev/dri \
-ti \
--mount type=bind,source=/var/log/qrack,target=/var/log/qrack \
twobombs/thereminq bash /root/run-tnn-gpu1

Archived: quantum-inspired Qimcifa high-qubit workloads ( BigINT OpenCL - Shor's Monte Carlo on Ubuntu 18.04 )

docker run --gpus all \
--device=/dev/dri:/dev/dri \
-d twobombs/thereminq:qimcifa 



Tips for managing high-qubit and/or high-node workloads

  • Workloads with full entanglement and/or quantum simulations at or above 30 qubits
  • Mixed workloads based on longer/larger/deeper circuits with partial entanglement that exceed 36 qubits

To prevent these workloads from taking up all of the system's resources, it is good to take the following measures:

  • System memory should be at least large enough for the state vector to fit into RAM
  • From 30 qubits ( e.g. QFT ) count 8GB of RAM (and 4 cores with POCL), doubling with every additional qubit
  • Start an instance with a limit for memory and/or swap, e.g. with Docker: '-m 16g --memory-swap 32g' (see the sketch after this list)
  • Disable OOM killers in the kernel and/or the container orchestrator ( --oom-kill-disable )
  • OOM host change: add vm.overcommit_memory = 1 and vm.oom-kill = 0 in /etc/sysctl.conf
  • Swap should be on a dedicated and fast drive, where possible NVMe RAID, with bandwidth equal to that of the GPU/PCIe
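As a minimal sketch combining the measures above, assume a 31-qubit workload: at 8GB for 30 qubits, doubling per qubit, that is roughly 16GB of state vector, so a 16GB memory cap with 32GB of memory-plus-swap is a reasonable starting point (the run-supreme-cpu entry point is taken from the earlier example):

# Host-side overcommit settings from the list above (append to /etc/sysctl.conf)
echo 'vm.overcommit_memory = 1' | sudo tee -a /etc/sysctl.conf
echo 'vm.oom-kill = 0' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

# Cap the container at 16GB RAM and 32GB RAM+swap, and disable its OOM killer
docker run -m 16g --memory-swap 32g --oom-kill-disable \
-ti \
--mount type=bind,source=/var/log/qrack,target=/var/log/qrack \
twobombs/thereminq bash /root/run-supreme-cpu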


Sycamore, QFT & T_NN(-d) Results on an AMD Threadripper 1920X @ 4.2GHz

  • 24 threads with 32GB RAM, 2.5TB NVMe swap on an 11x RAID NVMe array - Tesla K80 24GB - Tesla M40 24GB results

  • M40 + K80 run script

If you want to reproduce these results, here are some gists to help you along.

Note: it is wise to run the benchmark program inside a container with limited main memory and overflow to fast swap, so that the system remains stable during intensive runs and high memory peaks; by fast swap we mean fast NVMe RAID0 swap.
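As a minimal sketch of such a swap device, assuming two spare NVMe drives; the device names are placeholders and this will destroy any data on those drives, so adapt it to your system:

# Assemble two NVMe drives into a RAID0 array and use it as high-priority swap
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
sudo mkswap /dev/md0
sudo swapon --priority 100 /dev/md0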


To run ThereminQ as a VirtualCL controller

Note: create the underlying directory structure as mentioned in the VCL readme of Qrackmin.
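At minimum, the two host paths bind-mounted in the example below must exist; the full layout is described in the Qrackmin VCL readme:

# Create the host directories that the VCL controller bind-mounts
sudo mkdir -p /var/log/vcl /var/log/qrack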

For example:

docker run \
--gpus all \
--device=/dev/dri:/dev/dri \
-d \
--mount type=bind,source=/var/log/vcl,target=/var/log/vcl \
--mount type=bind,source=/var/log/qrack,target=/var/log/qrack \
twobombs/thereminq:vcl-controller

Questions / Remarks / Ideas / Experiences: in Discord

Code from the following companies and initiatives is included in this container:

All rights and kudos belong to their respective owners.
If (your) code resides in this container image and you don't want that, please let me know.

Code of conduct: Contributor Covenant