AssertionError: Torch not compiled with CUDA enabled
hansolocambo opened this issue · 10 comments
First, I get this error message when running the WebUI:
text_proj\diffusion_pytorch_model.safetensors not found
And when trying to generate an image:
AssertionError: Torch not compiled with CUDA enabled
But I already installed:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
and the CMD tells me "Requirement already satisfied".
Running on Windows 11, RTX 3090, Intel i9-12900K.
Did you use the steps for the Windows install?
This command forces the reinstall of Pytorch:
pip install --upgrade --force-reinstall torch --extra-index-url https://download.pytorch.org/whl/cu117
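Side note: if a reinstall doesn't help, one quick sanity check is whether the version string of the wheel that actually gets imported carries a CUDA build tag. A small sketch (hypothetical helper, not part of stable-karlo) that classifies PyTorch version strings:

```python
# CUDA wheels from download.pytorch.org carry a "+cuXXX" local version
# tag (e.g. "2.0.0+cu117"); CPU-only wheels carry "+cpu" or no tag.
def is_cuda_build(version: str) -> bool:
    _, _, local = version.partition("+")
    return local.startswith("cu")

print(is_cuda_build("2.0.0+cu117"))  # True: CUDA build
print(is_cuda_build("2.0.0+cpu"))    # False: CPU-only build
print(is_cuda_build("2.0.0"))        # False: no local tag
```

If the version printed by `python -c "import torch; print(torch.__version__)"` has a `+cuXXX` tag but CUDA still fails, the app is likely importing torch from a different environment.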
Thanks for the answer. Yes, I followed the Windows installation steps one after the other.
I ran your command to force-reinstall Torch.
For all collected packages (mpmath, typing-extensions, sympy, networkx, etc.) I got the same message each time:
- Found existing installation of XXX > Uninstalling XXX > Successfully uninstalled XXX
- And at the end they are all reinstalled:
"Successfully installed MarkupSafe-2.1.2 filelock-3.11.0 jinja2-3.1.2 mpmath-1.3.0 networkx-3.1 sympy-1.11.1 torch-2.0.0+cu117 typing-extensions-4.5.0"
Sadly nothing changed :/
"AssertionError: Torch not compiled with CUDA enabled"
I've been using Automatic1111 and InvokeAI without trouble for months and would have loved to try Karlo. Sadly there's very little info around to help solve such issues.
It seems like the correct Pytorch is installed. Does the rest of the app open correctly or is it just when you click generate?
If possible, could you try copying the entire output of the error so I could take a look.
The app starts properly; the problem occurs when I click Generate.
I then get a lengthy traceback in the webUI (and the same in the CMD window):
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "E:\Automatic1111\stable-karlo\app.py", line 143, in <module>
main()
File "E:\Automatic1111\stable-karlo\app.py", line 104, in main
images = generate(
File "E:\Automatic1111\stable-karlo\models\generate.py", line 83, in generate
pipe = make_pipeline_generator(cpu=cpu)
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 717, in wrapped_func
return get_or_create_cached_value()
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 696, in get_or_create_cached_value
return_value = non_optional_func(*args, **kwargs)
File "E:\Automatic1111\stable-karlo\models\generate.py", line 42, in make_pipeline_generator
pipe = pipe.to("cuda")
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 396, in to
module.to(torch_device)
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
return self._apply(convert)
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
param_applied = fn(param)
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
Could you try running this command from the same place you run the streamlit
command and see the output.
python -c "import torch; print(torch.__version__)"
Sure ;) Thanks for trying to figure out what's wrong (b^-^)b
python -c "import torch; print(torch.__version__)"
2.0.0+cu117
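The `+cu117` tag says that wheel is a CUDA build, so a common remaining suspect is that the `streamlit` command runs under a different interpreter than the `python -c` check. A hypothetical diagnostic (not part of stable-karlo) that can be run from the same shell as streamlit, and which locates torch without importing it:

```python
# Print which interpreter is running and where "torch" would be
# imported from. Comparing this output between the shell that runs
# `streamlit run app.py` and the one used for the version check shows
# whether both commands see the same environment.
import sys
import importlib.util

print("interpreter:", sys.executable)
spec = importlib.util.find_spec("torch")
print("torch would load from:", spec.origin if spec else "not installed here")
```

If the two paths differ, activating the project's virtual environment before launching streamlit (or calling its `python.exe` explicitly) should make them agree.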
I just pushed an update to use the new Streamlit cache and I think this might solve the issue.
To get the newest version run git pull
in the stable-karlo folder. Hope it works!
Trying that now. I also hope this'll work ;)
Sorry to be the bearer of bad news, but sadly it didn't change the problem. After clicking Generate I get:
2023-04-07 01:04:26.656 Uncaught app exception
Traceback (most recent call last):
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 245, in _get_or_create_cached_value
cached_result = cache.read_result(value_key)
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\streamlit\runtime\caching\cache_resource_api.py", line 447, in read_result
raise CacheKeyNotFoundError()
streamlit.runtime.caching.cache_errors.CacheKeyNotFoundError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 293, in _handle_cache_miss
cached_result = cache.read_result(value_key)
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\streamlit\runtime\caching\cache_resource_api.py", line 447, in read_result
raise CacheKeyNotFoundError()
streamlit.runtime.caching.cache_errors.CacheKeyNotFoundError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "E:\Automatic1111\stable-karlo\app.py", line 143, in <module>
main()
File "E:\Automatic1111\stable-karlo\app.py", line 104, in main
images = generate(
File "E:\Automatic1111\stable-karlo\models\generate.py", line 83, in generate
pipe = make_pipeline_generator(cpu=cpu)
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 194, in wrapper
return cached_func(*args, **kwargs)
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 225, in __call__
return self._get_or_create_cached_value(args, kwargs)
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 248, in _get_or_create_cached_value
return self._handle_cache_miss(cache, value_key, func_args, func_kwargs)
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 302, in _handle_cache_miss
computed_value = self._info.func(*func_args, **func_kwargs)
File "E:\Automatic1111\stable-karlo\models\generate.py", line 42, in make_pipeline_generator
pipe = pipe.to("cuda")
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 396, in to
module.to(torch_device)
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
return self._apply(convert)
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
param_applied = fn(param)
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "E:\Automatic1111\stable-karlo\.env\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
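The traceback shows the failure comes from the unconditional `pipe = pipe.to("cuda")` in models/generate.py. One defensive pattern, sketched here with hypothetical names rather than the project's real code, is to probe CUDA availability before choosing the device and fall back to CPU instead of letting torch raise:

```python
# Sketch of a device pick that avoids the AssertionError when the
# imported torch build has no CUDA support. pick_device() is a
# hypothetical helper, not a function in stable-karlo.
def pick_device() -> str:
    try:
        import torch
        if torch.cuda.is_available():  # False on CPU-only builds
            return "cuda"
    except ImportError:
        pass  # torch missing entirely: still fall back to CPU
    return "cpu"

device = pick_device()
print("moving pipeline to:", device)
# in generate.py this would become: pipe = pipe.to(device)
```

This would not fix the underlying environment mismatch, but it would let the app run (slowly) on CPU and surface a clearer signal of which torch build is being imported.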
That's unfortunate. I can't reproduce the error on my end, but I'll continue to look into it.
You could try the Colab version in the meantime.
Nah ;) I'll wait for a future update. Sorry my reports didn't help you figure out whatever is happening.
I can run 250 images at 1024x576 in a bit more than 5 minutes with my config; Colabs are just way too slow.
Cheers. Have a good day.