aria1th/Hypernetwork-MonkeyPatch-Extension

Gamma Training does not seem to be working.

tsukimiya opened this issue · 1 comment

When I run Gamma Training, the following error occurs and training does not proceed.
The versions I am using are
MonkeyPatch ( 4c87144 )
WEB UI ( AUTOMATIC1111/stable-diffusion-webui@0b8911d )
which are the latest at the time of writing.

Is there anything else I should investigate?

Preparing dataset...
100%|█████████████████████████████████████████| 114/114 [00:13<00:00,  8.75it/s]
  0%|                                                  | 0/3500 [00:00<?, ?it/s]Traceback (most recent call last):
  File "/notebooks/stable-diffusion-webui/extensions/Hypernetwork-MonkeyPatch-Extension/patches/external_pr/hypernetwork.py", line 260, in train_hypernetwork
    scaler.scale(loss).backward()
  File "/usr/local/lib/python3.9/dist-packages/torch/_tensor.py", line 396, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/usr/local/lib/python3.9/dist-packages/torch/autograd/__init__.py", line 173, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper__native_layer_norm_backward)
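
For reference, the RuntimeError above means that at least one module touched during the backward pass (here, a layer norm) has its weights on the CPU while the activations are on cuda:0. A minimal diagnostic sketch (not part of either extension; `model` stands for whichever module is being trained or used for conditioning) to list any parameters that are off the GPU:

import torch

def find_cpu_params(model: torch.nn.Module, expected: str = "cuda:0"):
    """Print every parameter that is not on the expected device."""
    expected_device = torch.device(expected)
    for name, param in model.named_parameters():
        if param.device != expected_device:
            print(f"{name}: {param.device}")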

Sorry, this turned out to be a conflict with the bbc-mc/sdweb-clip-changer extension.
Closing this issue.
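
A hedged sketch of how one might confirm such a conflict from inside the webui process, assuming the standard AUTOMATIC1111 module layout (an extension that swaps the CLIP model can leave the replacement on the CPU):

from modules import shared

# Verify that the CLIP conditioning stage is actually on the GPU before
# starting hypernetwork training; expect {device(type='cuda', index=0)}.
devices = {p.device for p in shared.sd_model.cond_stage_model.parameters()}
print(devices)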