Getting the following CUDA error
gsgoldma opened this issue · 10 comments
Traceback:
  File "D:\stable-karlo\.env\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
  File "D:\stable-karlo\app.py", line 105, in <module>
    main()
  File "D:\stable-karlo\app.py", line 69, in main
    images = generate(
  File "D:\stable-karlo\model\generate.py", line 68, in generate
    pipe = make_pipe()
  File "D:\stable-karlo\.env\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 625, in wrapped_func
    return get_or_create_cached_value()
  File "D:\stable-karlo\.env\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 609, in get_or_create_cached_value
    return_value = non_optional_func(*args, **kwargs)
  File "D:\stable-karlo\model\generate.py", line 41, in make_pipe
    return pipe.to("cuda")
  File "D:\stable-karlo\.env\lib\site-packages\diffusers\pipeline_utils.py", line 270, in to
    module.to(torch_device)
  File "D:\stable-karlo\.env\lib\site-packages\torch\nn\modules\module.py", line 989, in to
    return self._apply(convert)
  File "D:\stable-karlo\.env\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
    module._apply(fn)
  File "D:\stable-karlo\.env\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
    module._apply(fn)
  File "D:\stable-karlo\.env\lib\site-packages\torch\nn\modules\module.py", line 664, in _apply
    param_applied = fn(param)
  File "D:\stable-karlo\.env\lib\site-packages\torch\nn\modules\module.py", line 987, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "D:\stable-karlo\.env\lib\site-packages\torch\cuda\__init__.py", line 221, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
It seems like your PyTorch installation doesn't have CUDA enabled. You can check the PyTorch website for how to install it with CUDA enabled for your system.

On Windows, after the source step, try running this:

pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu117
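Once that finishes, a quick sanity check from Python (these are just the standard torch APIs, nothing stable-karlo specific):

import torch

# True only if PyTorch was installed with CUDA support
# and a compatible NVIDIA driver is present.
print(torch.cuda.is_available())

# The CUDA version the wheel was built against, e.g. "11.7";
# None for a CPU-only build.
print(torch.version.cuda)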
I tried that but still receive the error. ChatGPT recommended I install the CUDA toolkit, but that had no effect. I also get this at the beginning:
text_encoder\model.safetensors not found
Fetching 26 files: 100%|##########| 26/26 [00:00<00:00, 1663.92it/s]
Same here. Fix doesn't work either.
Are you sure you have an Nvidia GPU that is compatible with CUDA? Perhaps you don't have the CUDA toolkit installed.
Here is a link to download the CUDA toolkit: https://developer.nvidia.com/cuda-downloads
After you've set up CUDA, go into the stable-karlo folder, activate the environment, and try this:
pip install -r requirements.txt
pip install --upgrade --force-reinstall torch --extra-index-url https://download.pytorch.org/whl/cu117
That should force pip to replace the CPU-only build of PyTorch with the CUDA-enabled one.
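If you want to double-check which build ended up installed, the version string of a CUDA wheel carries the CUDA tag, while a CPU-only wheel ends in +cpu:

import torch

# CUDA wheels report a version like "1.13.1+cu117";
# CPU-only wheels report something like "1.13.1+cpu".
print(torch.__version__)

# Name of the first visible GPU, if CUDA is usable.
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))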
@andybak Are you also running it on Windows? Do you think you could attach the error message you're getting?
I actually got the error resolved. I am on Windows. I screwed up by making my own env instead of using the .env to install the packages. Unfortunately, I then got an OOM error, so it seems like I don't have enough VRAM. I guess I'll wait till it gets optimized more.
The bash command wasn't being recognized on my PC, even though I was running it in Git Bash. I was probably doing something wrong. The commands are below.
git clone https://github.com/kpthedev/stable-karlo.git
cd stable-karlo
python -m venv .env
source .env/bin/activate  # <---- changed this for conda
pip install -r requirements.txt
So I just used conda/the command line, and ChatGPT told me to use .env\Scripts\activate.bat to activate the .env environment and install the requirements, and then it worked. I may have had to force-reinstall torch as in the earlier comment, but I don't remember if that was actually necessary.
> The bash command wasn't being recognized on my PC, even though I was running it in Git Bash.
Yeah, I got a chance to test on Windows, and I had to use activate.bat along with the torch reinstall, as you said.
As for the OOM errors, you can try the cpu-offloading branch. I was able to generate Karlo images with 8GB of VRAM, but the upscaling requires way more.
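For reference, newer versions of diffusers (with accelerate installed) expose CPU offloading directly on the pipeline. Something along these lines is roughly what the branch does for the Karlo stage; the exact stable-karlo wiring may differ:

import torch
from diffusers import UnCLIPPipeline

# Load the Karlo pipeline in half precision to cut VRAM use.
pipe = UnCLIPPipeline.from_pretrained(
    "kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16
)

# Instead of pipe.to("cuda"), move each submodule to the GPU only
# while it is actually running; weights live in system RAM the
# rest of the time. Slower, but much lighter on VRAM.
pipe.enable_sequential_cpu_offload()

image = pipe("a photo of a corgi").images[0]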