NVIDIA/cuda-quantum

Exception (`invalid value in observe`) occurs when distributing an `observe` call over more QPUs than terms in the `spin_op`

bmhowe23 opened this issue

Required prerequisites

  • Consult the security policy. If reporting a security vulnerability, do not report the bug using this form. Use the process described in the policy to report the issue.
  • Make sure you've read the documentation. Your issue may be addressed there.
  • Search the issue tracker to verify that this hasn't already been reported. +1 or comment there if it has.
  • If possible, make a PR with a failing test to give us a starting point to work on!

Describe the bug

When running a test in a multi-QPU environment and distributing the Hamiltonian over those QPUs, if the number of terms in the Hamiltonian is smaller than the number of QPUs, the simulation fails with the following error:

terminate called after throwing an instance of 'std::runtime_error'
  what():  [custatevec] %invalid value in observe (line 536)
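The triggering condition is easy to see without any GPU: if `distribute_terms` splits the terms into contiguous chunks, asking for more chunks (QPUs) than there are terms necessarily produces empty chunks. A minimal pure-Python sketch of that chunking (an illustration only, not the library's actual implementation):

```python
def chunk_terms(terms, num_chunks):
    # Split a list of Hamiltonian terms into num_chunks contiguous
    # chunks. When num_chunks exceeds len(terms), the trailing chunks
    # come out empty -- the situation that trips the observe call.
    size = -(-len(terms) // num_chunks)  # ceiling division
    return [terms[i * size:(i + 1) * size] for i in range(num_chunks)]

# One Z term distributed over two QPUs: the second batch is empty.
batches = chunk_terms(["Z0"], 2)
```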

This needs to be fixed in the following targets:

  • nvidia, nvidia-fp64
  • nvidia-mgpu

Steps to reproduce the bug

The following pytest demonstrates the error, even on a single GPU:

import cudaq
from cudaq import spin

def test_empty_spin_op():

    @cudaq.kernel
    def circuit(theta: float):
        q = cudaq.qvector(2)
        x(q[0])
        ry(theta, q[1])
        x.ctrl(q[1], q[0])

    h = spin.z(0)
    batched = h.distribute_terms(2)
    assert batched[1].get_term_count() == 0
    assert cudaq.observe(circuit, batched[1], .59).expectation() == 0
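Until a fix lands, a caller-side workaround is to drop empty batches before dispatching them to observe, since an empty spin_op contributes nothing to the summed expectation value. A hedged sketch with a stand-in class (`FakeSpinOp` and `nonempty_batches` are hypothetical; only `get_term_count` mirrors the API used above):

```python
class FakeSpinOp:
    """Stand-in for a cudaq spin_op batch (illustration only)."""
    def __init__(self, num_terms):
        self.num_terms = num_terms

    def get_term_count(self):
        return self.num_terms

def nonempty_batches(batches):
    # Keep only batches that actually contain Hamiltonian terms; an
    # empty batch contributes 0 to the total expectation value anyway.
    return [b for b in batches if b.get_term_count() > 0]

kept = nonempty_batches([FakeSpinOp(1), FakeSpinOp(0)])
```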

Expected behavior

The observe call should succeed even when the Hamiltonian has fewer terms than the number of QPUs the calculation is distributed over.
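For reference, the expected values can be checked with a plain statevector computation (numpy stands in for the nvidia target here, and the qubit ordering is an assumption of the sketch): the full Hamiltonian h = Z0 applied to the reproducer's circuit gives -cos(theta), while the empty batch is an empty sum of terms and should evaluate to 0.

```python
import numpy as np

theta = 0.59
c, s = np.cos(theta / 2), np.sin(theta / 2)

# Amplitudes (basis |q0 q1>) after x(q[0]), ry(theta, q[1]),
# x.ctrl(q[1], q[0]) applied to |00>:
state = np.zeros(4)
state[0b10] = c  # cos(theta/2) |10>
state[0b01] = s  # sin(theta/2) |01>

# <Z0> = P(q0 = 0) - P(q0 = 1) = sin^2(theta/2) - cos^2(theta/2)
z0 = (state[0b00] ** 2 + state[0b01] ** 2) - (state[0b10] ** 2 + state[0b11] ** 2)

# The empty batch has no terms, so its expectation is the empty sum.
empty_expectation = 0.0
```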

Is this a regression? If it is, put the last known working version (or commit) here.

Yes, I believe PR #1002 introduced the problem.

Environment

  • CUDA Quantum version: 035bd3b
  • Python version: 3.10
  • C++ compiler: N/A
  • Operating system: Ubuntu 22.04

Suggestions

A PR is forthcoming.