The last step, running python gradio_app.py, reports an error
lgkt opened this issue · 6 comments
(omost) PS D:\ai\Omost> python gradio_app.py
D:\ai\Omost\lib_omost\pipeline.py:64: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
alphas_cumprod = torch.tensor(np.cumprod(alphas, axis=0), dtype=torch.float32)
Unload to CPU: AutoencoderKL
Unload to CPU: CLIPTextModel
Unload to CPU: CLIPTextModel
Unload to CPU: UNet2DConditionModel
Unused kwargs: ['_load_in_4bit', '_load_in_8bit', 'quant_method']. These kwargs are not used in <class 'transformers.utils.quantization_config.BitsAndBytesConfig'>.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:03<00:00, 1.55s/it]
WARNING:root:Some parameters are on the meta device device because they were offloaded to the cpu.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
WARNING:accelerate.big_modeling:You shouldn't move a model that is dispatched using accelerate hooks.
Traceback (most recent call last):
File "D:\ai\Omost\gradio_app.py", line 87, in
memory_management.unload_all_models(llm_model)
File "D:\ai\Omost\lib_omost\memory_management.py", line 67, in unload_all_models
return load_models_to_gpu([])
File "D:\ai\Omost\lib_omost\memory_management.py", line 42, in load_models_to_gpu
m.to(cpu)
File "C:\Users\lgkt\AppData\Roaming\Python\Python310\site-packages\accelerate\big_modeling.py", line 455, in wrapper
raise RuntimeError("You can't move a model that has some modules offloaded to cpu or disk.")
RuntimeError: You can't move a model that has some modules offloaded to cpu or disk.
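As an aside, the UserWarning at the top of the log is unrelated to the crash but easy to silence: passing something that is already a tensor through torch.tensor() triggers the copy-construct warning. A minimal sketch of the two warning-free forms, using illustrative schedule values (the real alphas come from the pipeline config):

```python
import numpy as np
import torch

# Illustrative values; the real schedule comes from the scheduler config.
alphas = torch.linspace(0.99, 0.90, 10, dtype=torch.float64)

# If the values are already a tensor, compute the cumulative product in torch:
alphas_cumprod = torch.cumprod(alphas, dim=0).to(torch.float32)

# If they are a plain NumPy array, torch.from_numpy avoids the
# "To copy construct from a tensor..." warning path:
alphas_cumprod_np = torch.from_numpy(np.cumprod(alphas.numpy(), axis=0)).to(torch.float32)
```

Either form produces the same float32 tensor without the warning.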
If you're using the original code, you can try other PRs, or modify it yourself.
"D:\ai\Omost\lib_omost\memory_management.py"这个文件我倒是找到了,但是咋改呢
@lllyasviel could you take a look?