NVIDIA/cuda-quantum

Subtraction of variables gives an unhandled BinOp.Sub error within a Python kernel

rtvuser1 opened this issue · 6 comments

Required prerequisites

  • Consult the security policy. If reporting a security vulnerability, do not report the bug using this form. Use the process described in the policy to report the issue.
  • Make sure you've read the documentation. Your issue may be addressed there.
  • Search the issue tracker to verify that this hasn't already been reported. +1 or comment there if it has.
  • If possible, make a PR with a failing test to give us a starting point to work on!

Describe the bug

The following compound subtraction expression, used within a Python kernel, produces an error when executed:
rz(1.0 - (mu / sigma), qubits[1])

However, it executes properly if rewritten as:
rz(1.0 + -(mu / sigma), qubits[1])

The error is:
cudaq.kernel.ast_bridge.CompilerError: zz__pybug_subtract.py:22: error: unhandled BinOp.Sub types.
(offending source -> 1.0 - mu / sigma)

Steps to reproduce the bug

      # A simple test program 
      # PYTHON KERNEL - SUBTRACTION BUG
      
      import cudaq
      
      @cudaq.kernel
      def bug_subtract():
      
          # Allocate a number of qubits
          qubits = cudaq.qvector(4)
      
          # Place some bits into test state
          x(qubits[0:2])
          
          mu = 0.7951
          sigma = 0.6065
             
          rz(1.0 + -(mu / sigma), qubits[1])      # this construct works, but the one below fails
          
          rz(1.0 - (mu / sigma), qubits[1])       # BUG: this line gives an error, one above doesn't
      
          # Apply measurement gates to just the `qubits`
          mz(qubits)
          
      #######################
      
      #### MAIN
       
      if __name__ == "__main__":
          
          result = cudaq.sample(bug_subtract)
          print(f"result: {result}")
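
For quick verification, the failure can also be isolated to a minimal kernel containing only the offending expression. The sketch below assumes the same gate and `qvector` API used in the reproducer above; the kernel name is illustrative:

      import cudaq

      @cudaq.kernel
      def minimal_sub():
          qubits = cudaq.qvector(1)
          mu = 0.7951
          sigma = 0.6065
          # Constant minus a parenthesized expression: the form that triggers
          # the unhandled BinOp.Sub error on 0.7.1
          rz(1.0 - (mu / sigma), qubits[0])
          mz(qubits)

      # Sampling compiles and runs the kernel, surfacing the CompilerError on 0.7.1
      cudaq.sample(minimal_sub)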

Expected behavior

There should be no error when compiling the subtraction expression `1.0 - (mu / sigma)`.

Is this a regression? If it is, put the last known working version (or commit) here.

Not a regression

Environment

  • CUDA Quantum version: 0.7.1
  • Python version: 3.10.12

Suggestions

This is a significant bug because the failing construct is so simple.
While the workaround is easy, it will still delay users while they figure it out (as it did for me).

I just gave this a try on latest from main (so past 0.7.1 I think) and it seems to work now. Can anyone else reproduce?

I can confirm the bug still exists in releases/v0.7.1 as of 1926eea (latest commit on that branch as of this morning).

FYI - 0.7.1 has not been officially released yet. The Docker images on our nightly channels are essentially release candidates right now.

I have been using 0.7.1, and that is how I found this bug.
I just confirmed that the bug no longer occurs on the "latest" image (as reported by Alex), from which I made a new container.
I have both running in parallel now.

Are you saying that 0.7.1 will be updated (and perhaps renamed to 0.7.2)?
Or would you expect a fix like this to go into 0.8.0?

I imagine you pulled nvcr.io/nvidia/nightly/cuda-quantum:0.7.1, but that is our nightly channel, where images are regularly updated. Version-numbered tags in the nightly channel should be considered release candidates (and therefore not final). The official place to grab final/released Docker images is here: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/quantum/containers/cuda-quantum/tags.

Since 0.7.1 is in the final stages of testing, we will probably not do another release candidate at this time, so I expect the fix for this particular issue to come in a future release (i.e. 0.7.2 or 0.8.0). Alternatively, you can keep working with latest.

Edit: Our official releases are also listed on our GitHub releases page (https://github.com/NVIDIA/cuda-quantum/releases/).

Got it. I assume you will not close this issue until the fix shows up in an official release?

GitHub typically closes issues once the PR associated with the issue is merged to main.

On a separate note, PR #1605 resolved this issue and had already been merged to main on May 2nd. We will leave this issue open until a new PR adds a test similar to the one provided here to our regular test suite.
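
For reference, a regression test along those lines might look roughly like the following. This is a hedged sketch in pytest style; the test name and structure are hypothetical and may differ from what is ultimately added to the suite:

      import cudaq

      def test_constant_minus_expression():
          @cudaq.kernel
          def kernel():
              qubits = cudaq.qvector(1)
              mu = 0.7951
              sigma = 0.6065
              # Previously raised "unhandled BinOp.Sub types" (see this issue)
              rz(1.0 - (mu / sigma), qubits[0])
              mz(qubits)

          # Compiling and sampling the kernel should complete without a CompilerError
          cudaq.sample(kernel)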