Allow torch tensors and cupy arrays as input to quantum kernels
zohimchandani opened this issue · 3 comments
Required prerequisites
- Consult the security policy. If reporting a security vulnerability, do not report the bug using this form. Use the process described in the policy to report the issue.
- Make sure you've read the documentation. Your issue may be addressed there.
- Search the issue tracker to verify that this hasn't already been reported. +1 or comment there if it has.
- If possible, make a PR with a failing test to give us a starting point to work on!
Describe the bug
PyTorch tensors and CuPy arrays live in GPU memory, and we need to be able to pass them into quantum kernels directly from the GPU. Can we please add support for these?
The code snippet below works fine with NumPy:
import cudaq
from cudaq import spin
import numpy as np
n_samples = 5
n_params = 2
params = np.random.rand(n_samples, n_params)
@cudaq.kernel
def kernel(params: np.ndarray):
    qvector = cudaq.qvector(1)
    rx(params[0], qvector[0])
    ry(params[1], qvector[0])

result = cudaq.observe(kernel, spin.z(0), params)
result
It does not work for torch.Tensor inputs:
import cudaq
from cudaq import spin
import torch
n_samples = 5
n_params = 2
params = torch.rand(n_samples, n_params)
@cudaq.kernel
def kernel(params: torch.Tensor):
    qvector = cudaq.qvector(1)
    rx(params[0], qvector[0])
    ry(params[1], qvector[0])

result = cudaq.observe(kernel, spin.z(0), params)
result
CompilerError: 792851843.py:12: error: torch is not a supported type.
(offending source -> torch.Tensor)
and a similar error is shown for CuPy arrays:
import cudaq
from cudaq import spin
import cupy as cp
n_samples = 5
n_params = 2
params = cp.random.rand(n_samples, n_params)
@cudaq.kernel
def kernel(params: cp.ndarray):
    qvector = cudaq.qvector(1)
    rx(params[0], qvector[0])
    ry(params[1], qvector[0])

result = cudaq.observe(kernel, spin.z(0), params)
CompilerError: 1207908829.py:11: error: cp is not a supported type.
(offending source -> cp.ndarray)
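Until native support lands, a workaround (at the cost of a device-to-host copy) is to coerce GPU inputs to NumPy before the observe call. A minimal sketch, assuming only that torch tensors expose .detach().cpu().numpy() and CuPy arrays expose .get(); the helper name as_numpy is hypothetical:

```python
import numpy as np

def as_numpy(params):
    """Coerce a torch.Tensor / cupy.ndarray / array-like to a host np.ndarray.
    Incurs a device-to-host copy for GPU-resident inputs."""
    if hasattr(params, "detach"):            # torch.Tensor path
        return params.detach().cpu().numpy()
    if not isinstance(params, np.ndarray) and hasattr(params, "get"):
        return params.get()                  # cupy.ndarray device-to-host copy
    return np.asarray(params)

# result = cudaq.observe(kernel, spin.z(0), as_numpy(params))
```

This defeats the purpose of keeping data on the GPU, but it keeps the kernel signature at the supported np.ndarray type in the meantime.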
Steps to reproduce the bug
NA
Expected behavior
NA
Is this a regression? If it is, put the last known working version (or commit) here.
Not a regression
Environment
- CUDA Quantum version:
- Python version:
- C++ compiler:
- Operating system:
Suggestions
No response
I have a tensor that lives in GPU memory.
I want the ability to access that pointer and pass it to the observe
call without a GPU-to-CPU memory transfer.
Something like this:
import torch
import cudaq
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# This tensor lives on gpu memory
thetas = torch.Tensor([1,2,3]).to(device)
cudaq.observe(kernel, thetas)
We should also support this for CuPy arrays, which live in GPU memory:
import cupy as cp
x = cp.random.rand(4)
x.device  # this array lives in GPU memory
cudaq.observe(kernel, x)
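Zero-copy interchange like this usually rides on a shared pointer protocol rather than on per-library types. As a host-side illustration: NumPy's `__array_interface__` is the CPU analogue of CuPy's `__cuda_array_interface__` and of DLPack (which torch supports via `torch.utils.dlpack`); consuming one of these protocols is what would let cudaq read GPU buffers without a copy.

```python
import numpy as np

# NumPy exposes its raw data pointer through __array_interface__.
# CuPy exposes the same structure for device memory through
# __cuda_array_interface__, and torch tensors report their device
# pointer via Tensor.data_ptr().
a = np.arange(4, dtype=np.float64)
ptr, read_only = a.__array_interface__["data"]  # (address, writability flag)
```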
UPDATE:
It would also be nice for the output of cudaq.observe(), which lives in GPU memory,
to be usable as an input to a PyTorch function without a CPU-GPU memory transfer.
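Today the only bridge appears to be host-side: `expectation()` returns a Python float, so handing it to PyTorch goes through the CPU. A sketch of that stopgap (the helper name `to_torch` is hypothetical):

```python
import torch

def to_torch(expectation_value, device="cpu"):
    # expectation() currently yields a host-side Python float, so
    # targeting a CUDA device here still costs a host-to-device copy.
    return torch.as_tensor(expectation_value, dtype=torch.float64, device=device)

# cost = to_torch(cudaq.observe(kernel, hamiltonian, params).expectation())
```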
Same here. I want to feed the result of the PyTorch minimization, which is on the GPU, into the observe call on the GPU in cudaq:
import torch
from torchmin import minimize
import cudaq

spin_ham = .....
init_params = torch.from_numpy(init_params)

@cudaq.kernel
def main_kernel(nelec: int, qubits_num: int, thetas: torch.tensor):
    qubits = cudaq.qvector(qubits_num)
    for i in range(nelec):
        x(qubits[i])
    cudaq.kernels.uccsd(qubits, thetas, nelec, qubits_num)

def objective_func(parameter_vector):
    cost = cudaq.observe(main_kernel, spin_ham, nelectrons, qubits_num, parameter_vector).expectation()
    return cost

result_vqe = minimize(objective_func, init_params, method='l-bfgs')
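As a stopgap for the snippet above, the parameter tensor can be lowered to a host list before the observe call, with the kernel taking list[float] instead of torch.tensor. A hedged sketch; host_params is a hypothetical helper:

```python
import numpy as np

def host_params(parameter_vector):
    """Bridge a torch tensor (or any array-like) to the list[float] a
    cudaq kernel parameter can accept. Costs a GPU-to-host copy."""
    if hasattr(parameter_vector, "detach"):  # torch.Tensor path
        parameter_vector = parameter_vector.detach().cpu().numpy()
    return np.asarray(parameter_vector, dtype=float).tolist()

# def objective_func(parameter_vector):
#     return cudaq.observe(main_kernel, spin_ham, nelectrons, qubits_num,
#                          host_params(parameter_vector)).expectation()
```

The extra copy happens once per optimizer step, which is exactly the overhead this issue asks to eliminate.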