netsharecmu/NetShare

TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

EricXiaozj opened this issue · 1 comment

Hello,
I encountered this error while trying to generate data on a GPU. The full error message is:

Traceback (most recent call last):
  File "/home/ubuntu/xzj/NetShare-new/netshare/models/model.py", line 34, in generate
    return self._generate(
  File "/home/ubuntu/xzj/NetShare-new/netshare/models/doppelganger_torch_model.py", line 247, in _generate
    ) = dg.generate(
  File "/home/ubuntu/xzj/NetShare-new/netshare/models/doppelganger_torch/doppelganger.py", line 237, in generate
    attribute, attribute_discrete, feature = tuple(
  File "/home/ubuntu/xzj/NetShare-new/netshare/models/doppelganger_torch/doppelganger.py", line 238, in <genexpr>
    np.concatenate(d, axis=0) for d in zip(*generated_data_list)
  File "<__array_function__ internals>", line 200, in concatenate
  File "/home/ubuntu/.conda/envs/NetShare-new/lib/python3.9/site-packages/torch/_tensor.py", line 970, in __array__
    return self.numpy()
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
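The root cause is that np.concatenate implicitly calls Tensor.numpy() on each element, and that conversion is only defined for host (CPU) tensors. A minimal sketch of the pattern, using stand-in tensors (the names here are illustrative, not from NetShare):

```python
import numpy as np
import torch

# Stand-ins for the per-batch outputs collected in generated_data_list.
parts = [torch.ones(2, 3), torch.zeros(2, 3)]

# If any element lived on a CUDA device, np.concatenate(parts) would raise the
# TypeError above. Moving each tensor to host memory first avoids it:
host_parts = [t.detach().cpu() for t in parts]
merged = np.concatenate(host_parts, axis=0)

print(merged.shape)  # (4, 3)
```

On a CPU-only machine the tensors are already on the host, which is why the bug only surfaces when generating with a GPU.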

It's just a small bug; I fixed it by modifying DoppelGANger._generate in ./netshare/models/doppelganger_torch/doppelganger.py.
The changed code looks like this:

def _generate(
        self,
        real_attribute_noise,
        addi_attribute_noise,
        feature_input_noise,
        h0,
        c0,
        given_attribute=None,
        given_attribute_discrete=None,
    ):

        # Switch all networks to eval mode for inference.
        self.generator.eval()
        self.discriminator.eval()
        if self.use_attr_discriminator:
            self.attr_discriminator.eval()

        if given_attribute is None and given_attribute_discrete is None:
            # Move all noise inputs onto the model's device before the forward pass.
            with torch.no_grad():
                attribute, attribute_discrete, feature = self.generator(
                    real_attribute_noise=real_attribute_noise.to(self.device),
                    addi_attribute_noise=addi_attribute_noise.to(self.device),
                    feature_input_noise=feature_input_noise.to(self.device),
                    h0=h0.to(self.device),
                    c0=c0.to(self.device)
                )
        else:
            # Conditional generation: the given attributes arrive as numpy arrays,
            # so convert them to tensors before moving them to the device.
            given_attribute = torch.from_numpy(given_attribute).float()
            given_attribute_discrete = torch.from_numpy(
                given_attribute_discrete).float()
            with torch.no_grad():
                attribute, attribute_discrete, feature = self.generator(
                    real_attribute_noise=real_attribute_noise.to(self.device),
                    addi_attribute_noise=addi_attribute_noise.to(self.device),
                    feature_input_noise=feature_input_noise.to(self.device),
                    h0=h0.to(self.device),
                    c0=c0.to(self.device),
                    given_attribute=given_attribute.to(self.device),
                    given_attribute_discrete=given_attribute_discrete.to(self.device),
                )
        # Move outputs back to host memory so the np.concatenate call in
        # generate() (which triggers Tensor.numpy()) succeeds.
        return attribute.cpu(), attribute_discrete.cpu(), feature.cpu()

Hi @EricXiaozj, thanks a lot for bringing up the issue and sharing the solution.

We just did the initial migration to PyTorch and haven't tested it on GPU yet. Glad you figured it out! We will push a patch for this shortly.

Updated: I have applied the proposed changes in #28. Cheers!