peterwilli/sd-leap-booster

Getting error on Colab file: RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor


The demo isn't working. I tried several times, even with the provided data, and it still fails. It seems some change was made so that the weights and the input data are no longer loaded onto the same device; there is a CPU/GPU mismatch.

Both the images and the weights should be on the GPU, but I couldn't figure out where the issue is. (A guess at the kind of fix is sketched after the full traceback below.)
full error:
╭───────────────────── Traceback (most recent call last) ─────────────────────╮
│ /usr/local/bin/leap_textual_inversion:7 in <module>                         │
│ │
│ 4 __import__('pkg_resources').require('leap-sd==0.0.2') │
│ 5 __file__ = '/content/sd-leap-booster/bin/leap_textual_inversion' │
│ 6 with open(__file__) as f: │
│ ❱ 7 │ exec(compile(f.read(), __file__, 'exec')) │
│ 8 │
│ │
│ /content/sd-leap-booster/bin/leap_textual_inversion:781 in <module> │
│ │
│ 778 │
│ 779 │
│ 780 if __name__ == "__main__": │
│ ❱ 781 │ main() │
│ 782 │
│ │
│ /content/sd-leap-booster/bin/leap_textual_inversion:540 in main │
│ │
│ 537 │ leap = leap_sd.LM.load_from_checkpoint(args.leap_model_path) │
│ 538 │ leap.eval() │
│ 539 │ │
│ ❱ 540 │ boosted_embed = boost_embed(leap, args.train_data_dir) │
│ 541 │ token_embeds[placeholder_token_id] = boosted_embed │
│ 542 │ print(f"Successfully boosted embed to {boosted_embed}") │
│ 543 │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py:115 in │
│ decorate_context │
│ │
│ 112 │ @functools.wraps(func) │
│ 113 │ def decorate_context(*args, **kwargs): │
│ 114 │ │ with ctx_factory(): │
│ ❱ 115 │ │ │ return func(*args, **kwargs) │
│ 116 │ │
│ 117 │ return decorate_context │
│ 118 │
│ │
│ /content/sd-leap-booster/bin/leap_textual_inversion:452 in boost_embed │
│ │
│ 449 │ images = load_images(images_folder) │
│ 450 │ # Simulate single item batch │
│ 451 │ images = images.unsqueeze(0) │
│ ❱ 452 │ embed_model = leap(images) │
│ 453 │ embed_model = embed_model.squeeze() │
│ 454 │ return embed_model │
│ 455 │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in │
│ _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or s │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hoo │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ /content/sd-leap-booster/leap_sd/module.py:84 in forward │
│ │
│ 81 │ │ for i in range(images_len): │
│ 82 │ │ │ image_selection = x[:, i, ...] │
│ 83 │ │ │ if xf is None: │
│ ❱ 84 │ │ │ │ xf = self.features(image_selection) │
│ 85 │ │ │ else: │
│ 86 │ │ │ │ xf += self.features(image_selection) │
│ 87 │ │ xf = xf / images_len │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in │
│ _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or s │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hoo │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/container.py:217 in │
│ forward │
│ │
│ 214 │ # with Any as TorchScript expects a more precise type │
│ 215 │ def forward(self, input): │
│ 216 │ │ for module in self: │
│ ❱ 217 │ │ │ input = module(input) │
│ 218 │ │ return input │
│ 219 │ │
│ 220 │ def append(self, module: Module) -> 'Sequential': │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in │
│ _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or s │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hoo │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/container.py:217 in │
│ forward │
│ │
│ 214 │ # with Any as TorchScript expects a more precise type │
│ 215 │ def forward(self, input): │
│ 216 │ │ for module in self: │
│ ❱ 217 │ │ │ input = module(input) │
│ 218 │ │ return input │
│ 219 │ │
│ 220 │ def append(self, module: Module) -> 'Sequential': │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in │
│ _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or s │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hoo │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py:463 in │
│ forward │
│ │
│ 460 │ │ │ │ │ │ self.padding, self.dilation, self.groups) │
│ 461 │ │
│ 462 │ def forward(self, input: Tensor) -> Tensor: │
│ ❱ 463 │ │ return self._conv_forward(input, self.weight, self.bias) │
│ 464 │
│ 465 class Conv3d(_ConvNd): │
│ 466 │ __doc__ = r"""Applies a 3D convolution over an input signal compo │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py:459 in │
│ _conv_forward │
│ │
│ 456 │ │ │ return F.conv2d(F.pad(input, self._reversed_padding_repea │
│ 457 │ │ │ │ │ │ │ weight, bias, self.stride, │
│ 458 │ │ │ │ │ │ │ _pair(0), self.dilation, self.groups) │
│ ❱ 459 │ │ return F.conv2d(input, weight, bias, self.stride, │
│ 460 │ │ │ │ │ │ self.padding, self.dilation, self.groups) │
│ 461 │ │
│ 462 │ def forward(self, input: Tensor) -> Tensor: │
╰──────────────────────────────────────────────────────────────────────────────╯
RuntimeError: Input type (torch.FloatTensor) and weight type
(torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor
and weight is a dense tensor
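
For what it's worth, the failing call is `leap(images)` in `boost_embed`: the model weights are on the GPU (`torch.cuda.FloatTensor`) while the images are still on the CPU (`torch.FloatTensor`). The usual fix for this class of error is to move the input onto the model's device before the forward pass. This is a minimal sketch only, not the repo's actual code; `leap` and `images` are the objects from the traceback:

```python
# Hypothetical workaround: put the input batch on whatever device the model is on
device = next(leap.parameters()).device
images = images.unsqueeze(0).to(device)  # simulate a single-item batch, then move it
embed_model = leap(images).squeeze()
```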

Thanks.

Thanks. I will get to this after dinner 🍕

Thanks for mentioning this, @Strangersknowme! I don't know what happened, but I guess the way models are loaded has changed. I now explicitly load LEAP onto the CPU and things seem fine. Let me know if you have further questions; the updated notebook is here: https://colab.research.google.com/drive/1-uBBQpPlt4k5YDNZiN4H4ICWlkVcitfP?usp=sharing
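
The change itself is small. A minimal sketch, assuming PyTorch Lightning's standard `map_location` argument to `load_from_checkpoint` (the surrounding names are the ones from the traceback above):

```python
import torch
import leap_sd

# Load the LEAP checkpoint explicitly onto the CPU so the weights live on the
# same device as the image tensors that are prepared on the CPU.
leap = leap_sd.LM.load_from_checkpoint(
    args.leap_model_path, map_location=torch.device("cpu")
)
leap.eval()
```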