picobyte/stable-diffusion-webui-wd14-tagger

It doesn't work. My CUDA is v12.1.

xlibfly opened this issue · 14 comments

Traceback (most recent call last):
File "D:\miaoshouai-sd-webui-v230222full\python\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "D:\miaoshouai-sd-webui-v230222full\python\lib\site-packages\gradio\blocks.py", line 1434, in process_api
data = self.postprocess_data(fn_index, result["prediction"], state)
File "D:\miaoshouai-sd-webui-v230222full\python\lib\site-packages\gradio\blocks.py", line 1297, in postprocess_data
self.validate_outputs(fn_index, predictions) # type: ignore
File "D:\miaoshouai-sd-webui-v230222full\python\lib\site-packages\gradio\blocks.py", line 1272, in validate_outputs
raise ValueError(
ValueError: An event handler (on_interrogate_image_submit) didn't receive enough output values (needed: 7, received: 3).
Note: The Python runtime threw an exception. Please check the troubleshooting page.
Wanted outputs:
[state, html, html, label, label, label, html]
Received outputs:
[None, "", "

RuntimeError: D:\a_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:743 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.

Time taken: 1.1 sec.

A: 2.20 GB, R: 2.28 GB, Sys: 3.4/16 GB (21.4%)

"]

same here

me too.

me too, torch: 2.1.1+cu121

I have the same problem

I've tried using WD14 in the Dataset Tag Editor extension (https://github.com/toshiaki1729/stable-diffusion-webui-dataset-tag-editor) and for some reason it works for me.

same, torch: 2.1.2+cu121


It still doesn’t work, showing the same error.

Same error. Any solution?

Downgraded CUDA 12.3 → 12.1, same error.

Same error, no solution :(

same here QQ

I solved my problem by downloading this version: https://github.com/67372a/stable-diffusion-webui-wd14-tagger

After I removed the local CUDA Toolkit, Tagger ran successfully.

I tried using WD14 in the Dataset Tag Editor extension (https://github.com/toshiaki1729/stable-diffusion-webui-dataset-tag-editor) and for some reason it works for me.

That didn't work. :(
It responds with the same error:
RuntimeError: D:\a_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:891 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.

I've tried to fix it for over 8 hours, but nothing worked.
My ONNX Runtime version is 1.18.1, CUDA is 12.4, cuDNN is 9.3.0, torch is 2.4.0, torchaudio is 2.4.0, torchvision is 0.19.0, and the GPU driver is 555.85. Why does the SD WD14 tagger plugin still show "onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded"?

C:\Users\Administrator>echo %CUDA_PATH%
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4

import torch
print(torch.cuda.is_available())
True

CUDA_PATH
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4

CUDA_PATH_V12_4
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4
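Setting CUDA_PATH alone is not enough: onnxruntime finds the cudart/cublas/cuDNN DLLs through the normal Windows DLL search, so `<CUDA_PATH>\bin` (and the cuDNN `bin` directory) must also be on PATH, as the error message says. A rough check of that condition, assuming a Windows-style `;`-separated PATH value; `cuda_bin_on_path` is a hypothetical helper, not part of any library:

```python
import ntpath

def cuda_bin_on_path(cuda_path: str, path_value: str) -> bool:
    r"""Return True if <CUDA_PATH>\bin appears in a ';'-separated PATH
    value (case-insensitive, Windows-style). Hypothetical helper: the
    DLLs onnxruntime needs are located via PATH, not via CUDA_PATH."""
    target = ntpath.normcase(ntpath.join(cuda_path, "bin"))
    return any(ntpath.normcase(entry.strip()) == target
               for entry in path_value.split(";") if entry.strip())
```

If this returns False for your shell's `echo %PATH%`, add `%CUDA_PATH%\bin` (and the cuDNN `bin`) to PATH and restart the webui from a fresh terminal.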

import onnxruntime as ort
providers = ort.get_available_providers()
print(providers)
['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
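Worth noting: `ort.get_available_providers()` only reports the providers compiled into the installed wheel; it does not attempt to load the CUDA/cuDNN DLLs. That only happens when an `InferenceSession` is created, which is why the list above shows `CUDAExecutionProvider` even though session creation still fails. A sketch for spotting the case where a session silently comes up CPU-only (`cuda_fell_back_to_cpu` is a hypothetical helper, and `model.onnx` below is a placeholder path):

```python
def cuda_fell_back_to_cpu(requested, actual):
    """True when CUDAExecutionProvider was requested but the created
    session ended up CPU-only (hypothetical helper; compare the providers
    you passed in against session.get_providers() after creation)."""
    return ("CUDAExecutionProvider" in requested
            and "CUDAExecutionProvider" not in actual)

# With onnxruntime installed (not executed here):
#   import onnxruntime as ort
#   sess = ort.InferenceSession("model.onnx",
#                               providers=["CUDAExecutionProvider",
#                                          "CPUExecutionProvider"])
#   print(cuda_fell_back_to_cpu(["CUDAExecutionProvider"],
#                               sess.get_providers()))
```

So the provider list printed above does not prove the CUDA setup is healthy; only a successful session creation does.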

"I've checked everything thoroughly, following the official documentation, but I still can't use the reverse prompt suggestion plugin. Interestingly, this plugin was working before I installed CUDA. I'm wondering if the issue could be with the plugin itself?"