RuntimeError: CUDA out of memory.
KAJPER opened this issue · 2 comments
KAJPER commented
Setting jit to False because torch version is not 1.7.1.
c:\users\dzban\appdata\local\programs\python\python39\lib\site-packages\torch\functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ..\aten\src\ATen\native\TensorShape.cpp:2157.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Starting up...
Imagining "a_prompt" from the depths of my weights...
iteration: 0%| | 0/1050 [00:00<?, ?it/s]
epochs: 0%| | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
File "c:\users\dzban\appdata\local\programs\python\python39\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\users\dzban\appdata\local\programs\python\python39\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\dzban\AppData\Local\Programs\Python\Python39\Scripts\imagine.exe\__main__.py", line 7, in <module>
File "c:\users\dzban\appdata\local\programs\python\python39\lib\site-packages\deep_daze\cli.py", line 151, in main
fire.Fire(train)
File "c:\users\dzban\appdata\local\programs\python\python39\lib\site-packages\fire\core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "c:\users\dzban\appdata\local\programs\python\python39\lib\site-packages\fire\core.py", line 466, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "c:\users\dzban\appdata\local\programs\python\python39\lib\site-packages\fire\core.py", line 681, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "c:\users\dzban\appdata\local\programs\python\python39\lib\site-packages\deep_daze\cli.py", line 147, in train
imagine()
File "c:\users\dzban\appdata\local\programs\python\python39\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "c:\users\dzban\appdata\local\programs\python\python39\lib\site-packages\deep_daze\deep_daze.py", line 584, in forward
_, loss = self.train_step(epoch, i)
File "c:\users\dzban\appdata\local\programs\python\python39\lib\site-packages\deep_daze\deep_daze.py", line 505, in train_step
out, loss = self.model(self.clip_encoding)
File "c:\users\dzban\appdata\local\programs\python\python39\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "c:\users\dzban\appdata\local\programs\python\python39\lib\site-packages\deep_daze\deep_daze.py", line 200, in forward
out = self.model()
File "c:\users\dzban\appdata\local\programs\python\python39\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "c:\users\dzban\appdata\local\programs\python\python39\lib\site-packages\siren_pytorch\siren_pytorch.py", line 148, in forward
out = self.net(coords, mods)
File "c:\users\dzban\appdata\local\programs\python\python39\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "c:\users\dzban\appdata\local\programs\python\python39\lib\site-packages\siren_pytorch\siren_pytorch.py", line 83, in forward
x = layer(x)
File "c:\users\dzban\appdata\local\programs\python\python39\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "c:\users\dzban\appdata\local\programs\python\python39\lib\site-packages\siren_pytorch\siren_pytorch.py", line 51, in forward
out = self.activation(out)
File "c:\users\dzban\appdata\local\programs\python\python39\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "c:\users\dzban\appdata\local\programs\python\python39\lib\site-packages\siren_pytorch\siren_pytorch.py", line 22, in forward
return torch.sin(self.w0 * x)
RuntimeError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 6.00 GiB total capacity; 4.22 GiB already allocated; 0 bytes free; 4.24 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Can someone help?
geeknik commented
I would suggest messing with the following options until you can get it to run:
--image_width (default: 512)
--num_layers (default: 16)
Good luck!
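For reference, a lower-memory run might look like the sketch below. The flag values are guesses to tune for a 6 GB GPU (smaller width and fewer layers both shrink the SIREN network's memory footprint), and the `PYTORCH_CUDA_ALLOC_CONF` line is the fragmentation workaround the OOM message itself points to:

```shell
:: Windows cmd sketch; the exact values are assumptions, not known-good settings.
:: max_split_size_mb follows the hint in the OOM message; 128 is a starting guess.
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

:: Halve the resolution and the layer count, then raise them again if it fits.
imagine "a_prompt" --image_width=256 --num_layers=8
```

If it still runs out of memory, drop `--image_width` further before touching `--num_layers`, since resolution usually dominates the allocation.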
KAJPER commented
Thanks, I will try it :)