MrForExample/ComfyUI-3D-Pack

ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled

Un1ess opened this issue · 4 comments

Total VRAM 12288 MB, total RAM 65304 MB
pytorch version: 2.2.0+cu121
xformers version: 0.0.24
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
Using xformers cross attention
ASTERR config loaded successfully
Warn!: xFormers is available (Attention)
Warn!: Traceback (most recent call last):
File "D:\ComfyUI_Build\ComfyUI\nodes.py", line 1906, in load_custom_node
module_spec.loader.exec_module(module)
File "", line 940, in exec_module
File "", line 241, in call_with_frames_removed
File "D:\ComfyUI_Build\ComfyUI\custom_nodes\ComfyUI-3D-Pack_init
.py", line 47, in
module = importlib.import_module(f".{nodes_filename}", package=name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "importlib_init_.py", line 126, in import_module
File "", line 1204, in _gcd_import
File "", line 1176, in _find_and_load
File "", line 1147, in _find_and_load_unlocked
File "", line 690, in _load_unlocked
File "", line 940, in exec_module
File "", line 241, in _call_with_frames_removed
File "D:\ComfyUI_Build\ComfyUI\custom_nodes\ComfyUI-3D-Pack\nodes.py", line 76, in
from Unique3D.scripts.mesh_init import fast_geo
File "D:\ComfyUI_Build\ComfyUI\custom_nodes\ComfyUI-3D-Pack\Gen_3D_Modules\Unique3D\scripts\mesh_init.py", line 6, in
from .utils import meshlab_mesh_to_py3dmesh, py3dmesh_to_meshlab_mesh
File "D:\ComfyUI_Build\ComfyUI\custom_nodes\ComfyUI-3D-Pack\Gen_3D_Modules\Unique3D\scripts\utils.py", line 25, in
session = new_session(providers=providers)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_Build\python_embeded\Lib\site-packages\rembg\session_factory.py", line 44, in new_session
return session_class(model_name, sess_opts, providers, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_Build\python_embeded\Lib\site-packages\rembg\sessions\base.py", line 34, in init
self.inner_session = ort.InferenceSession(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Roaming\Python\Python311\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 396, in init
raise e
File "C:\Users\Administrator\AppData\Roaming\Python\Python311\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 383, in init
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "C:\Users\Administrator\AppData\Roaming\Python\Python311\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 415, in _create_inference_session
raise ValueError(
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)

Warn!: Cannot import D:\ComfyUI_Build\ComfyUI\custom_nodes\ComfyUI-3D-Pack module for custom nodes: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)

Check your custom_nodes directory and explicitly set providers to ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] when initializing the InferenceSession. The issue should go away; just blanket-set everything.
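For example, something like this (a minimal sketch; the model path is just a placeholder here, and rembg's new_session passes the same providers keyword through to ORT):

import onnxruntime as ort

# Explicit providers list, required since ORT 1.9; a provider that fails to
# initialize is generally skipped with a warning and the next one is tried
providers = ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
session = ort.InferenceSession("u2net.onnx", providers=providers)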

I edited D:\ComfyUI_Build\ComfyUI\custom_nodes\ComfyUI-3D-Pack\Gen_3D_Modules\Unique3D\scripts\utils.py:
providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']

But the errors came again:
Total VRAM 12288 MB, total RAM 65304 MB
pytorch version: 2.2.0+cu121
xformers version: 0.0.24
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
Using xformers cross attention
ASTERR config loaded successfully
Warn!: xFormers is available (Attention)
2024-07-18 10:24:59.4045468 [E:onnxruntime:Default, provider_bridge_ort.cc:1351 onnxruntime::TryGetProviderInfo_CUDA] D:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1131 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\ComfyUI_Build\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

EP Error D:\a_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:636 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
when using ['CUDAExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
2024-07-18 10:24:59.4817444 [E:onnxruntime:Default, provider_bridge_ort.cc:1351 onnxruntime::TryGetProviderInfo_CUDA] D:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1131 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\ComfyUI_Build\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
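LoadLibrary error 126 (ERROR_MOD_NOT_FOUND) on a DLL that clearly exists usually means one of its own dependencies, typically the CUDA runtime, cuBLAS, or cuDNN DLLs, could not be found on PATH. One way to surface the failure directly (a sketch, using the DLL path from the log above):

import ctypes

# Error 126: the DLL itself, or one of its dependent DLLs, is missing
dll_path = r"D:\ComfyUI_Build\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
try:
    ctypes.WinDLL(dll_path)
    print("CUDA provider DLL loaded OK")
except OSError as err:
    print("Load failed:", err)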

It seems onnxruntime 1.15 is incompatible with CUDA 12.1 and cuDNN 8.9.7?
I have CUDA 12.1 and cuDNN 8.9.7 installed on my computer.
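For what it's worth, the ORT CUDA EP requirements page pins each release to a CUDA major version, and the onnxruntime-gpu 1.15 wheels were built against CUDA 11.8, so a CUDA 12.x runtime won't satisfy them. A quick way to see what the environment actually has (sketch):

import onnxruntime as ort
import torch

print("onnxruntime:", ort.__version__)
# Note: this reports what the build supports, not whether the
# CUDA/cuDNN DLLs actually load at session-creation time
print("providers:", ort.get_available_providers())
print("torch CUDA:", torch.version.cuda)              # 12.1 for the cu121 wheel
print("torch cuDNN:", torch.backends.cudnn.version())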

Check your custom_nodes directory and explicitly set providers to ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] when initializing the InferenceSession. The issue should go away; just blanket-set everything.

Oh yeah, you are right.
I edited the code:

providersCustom = ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']

session = new_session(providers=providersCustom)

And because I installed the ZHO-ZHO YOLOWorld plugin, its inference-gpu dependency needs onnxruntime-gpu 1.15.1, and if I upgraded onnxruntime-gpu, other site-packages would need different versions too.
So I finally installed CUDA 11.8 and cuDNN v8.9.0 on my PC, and the problem was solved.
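For anyone who hits the same thing, one way to confirm the fix took (sketch; the model path is a placeholder) is to create a session and check which providers it actually ended up with:

import onnxruntime as ort

sess = ort.InferenceSession("u2net.onnx",
                            providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
# CUDAExecutionProvider should come first once CUDA 11.8 / cuDNN 8.9 load correctly
print(sess.get_providers())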

thanks!

That's great news! Glad it worked out