NVIDIA/cuda-quantum

In the Python version, qubits allocated within a conditional do not appear to be localized to the kernel


Required prerequisites

  • Consult the security policy. If reporting a security vulnerability, do not report the bug using this form. Use the process described in the policy to report the issue.
  • Make sure you've read the documentation. Your issue may be addressed there.
  • Search the issue tracker to verify that this hasn't already been reported. +1 or comment there if it has.
  • If possible, make a PR with a failing test to give us a starting point to work on!

Describe the bug

In the Python version, qubits allocated within a conditional do not appear to be localized to the kernel. The qubits for each new kernel invocation appear to be appended to the end of the prior invocation's qubits.
WORKAROUND: If the allocation is not controlled by a conditional, the qubits do appear to be local to the kernel.

Steps to reproduce the bug

#######################################################

# PYTHON KERNEL - BUG WITH KERNEL INITIALIZATION

# if qubit allocation is performed within a conditional,
# executing the kernel multiple times does not release the previous qubits

import cudaq
            
@cudaq.kernel
def kernel_test(num_qubits: int, method: int = 1):
   
    ### BUG: This approach fails to release qubits after execution
    if method == 1:
            
        # Allocate and init the specified number of qubits
        qubits = cudaq.qvector(num_qubits)
        
        # Initialize and measure the qubits
        h(qubits)
        mz(qubits)   
    
    ### WORKAROUND: This approach works fine
    '''
    # Allocate and init the specified number of qubits
    qubits = cudaq.qvector(num_qubits)

    if method == 1:

        # Initialize and measure the qubits
        h(qubits)
        mz(qubits)
    '''
#######################
# MAIN

if __name__ == "__main__":

    print(cudaq.draw(kernel_test, 4, 1))

    result = cudaq.sample(kernel_test, 4, 1)
    print(f"result: {result}")

    print(cudaq.draw(kernel_test, 4, 1))

    result = cudaq.sample(kernel_test, 4, 1)
    print(f"result: {result}")

Expected behavior

The number and organization of qubits allocated within a kernel may need to be controlled by kernel parameters.
However, when the qubits are allocated inside a block of code guarded by a conditional, they appear to be appended to a previously created qvector; instead, a new qvector should be created for each kernel invocation.
This can be seen by comparing the circuit diagrams drawn in the example above after each invocation: the second diagram adds its qubits to the end of the existing vector rather than starting a new one.
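One way to observe the growth programmatically is to count the qubit wires in the drawn circuit. The sketch below reuses kernel_test from the reproduction above and assumes cudaq.draw returns an ASCII diagram with one line of the form "qN : ..." per qubit wire (count_wires is a hypothetical helper, not a CUDA Quantum API):

import cudaq

def count_wires(diagram: str) -> int:
    # count lines that look like qubit wires, e.g. "q0 : ──h──"
    return sum(1 for line in diagram.splitlines() if line.lstrip().startswith("q"))

first = count_wires(cudaq.draw(kernel_test, 4, 1))
second = count_wires(cudaq.draw(kernel_test, 4, 1))
print(first, second)   # expected: 4 4 -- with the bug, the second count is larger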

Simply uncomment the WORKAROUND block above and comment out the offending code above it to see what the program should do; a standalone version of that workaround is sketched below.
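For convenience, here is a runnable sketch of that workaround (the kernel name kernel_test_workaround is made up; the body is just the commented-out block above, with the allocation hoisted out of the conditional):

import cudaq

@cudaq.kernel
def kernel_test_workaround(num_qubits: int, method: int):

    # Allocate the qubits unconditionally; only the operations are guarded
    qubits = cudaq.qvector(num_qubits)

    if method == 1:
        # Initialize and measure the qubits
        h(qubits)
        mz(qubits)

print(cudaq.draw(kernel_test_workaround, 4, 1))
print(f"result: {cudaq.sample(kernel_test_workaround, 4, 1)}")

With this version the qubits stay local to the kernel, as described above.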

Is this a regression? If it is, put the last known working version (or commit) here.

Not a regression

Environment

  • CUDA Quantum version: "latest" as of May 13, 2024
  • Python version: 3.10.12

Suggestions

This behavior is highly unexpected and may not even be noticed until the program grows and crashes with an out-of-memory error. Tracing the cause back to an allocation inside a conditional can then take some time and cause real frustration.
For these reasons, the issue should probably be addressed.
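In the spirit of the "failing test" prerequisite above, a possible starting point is sketched below. This is only a sketch: the test name is invented, and it assumes SampleResult.most_probable() returns a bitstring whose length equals the number of measured qubits. If the accumulation only shows up in the drawn circuit, the wire-counting check from the Expected behavior section could be used instead.

import cudaq

@cudaq.kernel
def conditional_alloc(num_qubits: int, method: int):
    if method == 1:
        qubits = cudaq.qvector(num_qubits)
        h(qubits)
        mz(qubits)

def test_conditional_allocation_stays_local():
    # Sampling the kernel twice should measure the same number of qubits each time.
    first = cudaq.sample(conditional_alloc, 4, 1)
    second = cudaq.sample(conditional_alloc, 4, 1)
    assert len(first.most_probable()) == 4
    assert len(second.most_probable()) == 4   # expected to fail while the qubits accumulate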