smthemex/ComfyUI_EchoMimic

The ComfyUI node installs fine with no red nodes, but errors out when running (urgent)

Opened this issue · 6 comments

The ComfyUI node installs fine with no red nodes, but an error appears when I run it. I hope the author can give some guidance — I spent half a day on this and still can't figure out the cause. I suspect the torch version may be too new?
Platform: Linux (WSL)
Python version: 3.12
torch version: 2.5.1+cu124
I ran the EchoMimic node and it stops at node #118, as shown in the screenshot below.
(screenshot)
My environment setup: I cloned the ComfyUI repository, installed it following its Linux installation instructions, and then installed the workflow from this URL: https://www.liblib.art/modelinfo/634250d8dbbf469880a936132265cf51?from=search&versionUuid=2841c2b54bf5484682069d63ae928f7a
That is the workflow shown in the screenshot. CosyVoice runs fine, but the error appears as soon as it reaches EchoMimic.

Error log:
Error segment 1:
(comfyui) ComfyUI ➤ python main.py git:master
[START] Security scan
[DONE] Security scan

ComfyUI-Manager: installing dependencies done.

** ComfyUI startup time: 2024-12-14 00:17:25.973294
** Platform: Linux
** Python version: 3.12.8 | packaged by conda-forge | (main, Dec 5 2024, 14:24:40) [GCC 13.3.0]
** Python executable: /home/ialover/anaconda3/envs/comfyui/bin/python
** ComfyUI Path: /home/ialover/document/ComfyUI
** Log path: /home/ialover/document/ComfyUI/comfyui.log

Prestartup times for custom nodes:
0.0 seconds: /home/ialover/document/ComfyUI/custom_nodes/rgthree-comfy
1.6 seconds: /home/ialover/document/ComfyUI/custom_nodes/ComfyUI-Manager

Total VRAM 8188 MB, total RAM 15842 MB
pytorch version: 2.5.1+cu124
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4060 Laptop GPU : cudaMallocAsync
Using pytorch cross attention
[Prompt Server] web root: /home/ialover/document/ComfyUI/web
Traceback (most recent call last):
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/diffusers/utils/import_utils.py", line 811, in _get_module
return importlib.import_module("." + module_name, self.__name__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 999, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/diffusers/loaders/lora_pipeline.py", line 33, in <module>
from .lora_base import LoraBaseMixin
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/diffusers/loaders/lora_base.py", line 47, in <module>
from peft.tuners.tuners_utils import BaseTunerLayer
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/peft/__init__.py", line 22, in <module>
from .auto import (
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/peft/auto.py", line 32, in <module>
from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/peft/mapping.py", line 25, in <module>
from .mixed_model import PeftMixedModel
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/peft/mixed_model.py", line 29, in <module>
from .peft_model import PeftModel
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/peft/peft_model.py", line 37, in <module>
from transformers import Cache, DynamicCache, EncoderDecoderCache, PreTrainedModel
ImportError: cannot import name 'EncoderDecoderCache' from 'transformers' (/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/transformers/__init__.py)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/diffusers/utils/import_utils.py", line 811, in _get_module
return importlib.import_module("." + module_name, self.__name__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 999, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py", line 24, in <module>
from ...loaders import FromSingleFileMixin, IPAdapterMixin, StableDiffusionLoraLoaderMixin, TextualInversionLoaderMixin
File "<frozen importlib._bootstrap>", line 1412, in _handle_fromlist
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/diffusers/utils/import_utils.py", line 801, in __getattr__
module = self._get_module(self._class_to_module[name])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/diffusers/utils/import_utils.py", line 813, in _get_module
raise RuntimeError(
RuntimeError: Failed to import diffusers.loaders.lora_pipeline because of the following error (look up to see its traceback):
cannot import name 'EncoderDecoderCache' from 'transformers' (/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/transformers/__init__.py)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/ialover/document/ComfyUI/nodes.py", line 2035, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 999, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/home/ialover/document/ComfyUI/custom_nodes/StyleShot-ComfyUI/__init__.py", line 9, in <module>
from diffusers import StableDiffusionPipeline,UNet2DConditionModel, ControlNetModel,StableDiffusionAdapterPipeline, T2IAdapter
File "<frozen importlib._bootstrap>", line 1412, in _handle_fromlist
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/diffusers/utils/import_utils.py", line 802, in __getattr__
value = getattr(module, name)
^^^^^^^^^^^^^^^^^^^^^
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/diffusers/utils/import_utils.py", line 802, in __getattr__
value = getattr(module, name)
^^^^^^^^^^^^^^^^^^^^^
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/diffusers/utils/import_utils.py", line 801, in __getattr__
module = self._get_module(self._class_to_module[name])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/diffusers/utils/import_utils.py", line 813, in _get_module
raise RuntimeError(
RuntimeError: Failed to import diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion because of the following error (look up to see its traceback):
Failed to import diffusers.loaders.lora_pipeline because of the following error (look up to see its traceback):
cannot import name 'EncoderDecoderCache' from 'transformers' (/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/transformers/__init__.py)

Cannot import /home/ialover/document/ComfyUI/custom_nodes/StyleShot-ComfyUI module for custom nodes: Failed to import diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion because of the following error (look up to see its traceback):
Failed to import diffusers.loaders.lora_pipeline because of the following error (look up to see its traceback):
cannot import name 'EncoderDecoderCache' from 'transformers' (/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/transformers/__init__.py)
(pysssss:WD14Tagger) [DEBUG] Available ORT providers: AzureExecutionProvider, CPUExecutionProvider
(pysssss:WD14Tagger) [DEBUG] Using ORT providers: CUDAExecutionProvider, CPUExecutionProvider

Loading: ComfyUI-Manager (V2.55.5)

ComfyUI Version: v0.3.7-27-g4e14032 | Released on '2024-12-13'

/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {name} is deprecated, please import via timm.layers", FutureWarning)
Setting up [LPIPS] perceptual loss: trunk [alex], v[0.1], spatial [off]
/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing weights=AlexNet_Weights.IMAGENET1K_V1. You can also use weights=AlexNet_Weights.DEFAULT to get the most up-to-date weights.
warnings.warn(msg)
Loading model from: /home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/lpips/weights/v0.1/alex.pth
/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/lpips/lpips.py:107: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
self.load_state_dict(torch.load(model_path, map_location='cpu'), strict=False)
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json

[rgthree-comfy] Loaded 42 epic nodes. 🎉

[START] ComfyUI AlekPet Nodes v1.0.37

Node -> ChatGLMNode: ChatGLM4TranslateCLIPTextEncodeNode, ChatGLM4TranslateTextNode, ChatGLM4InstructNode, ChatGLM4InstructMediaNode [Loading]
Node -> ArgosTranslateNode: ArgosTranslateCLIPTextEncodeNode, ArgosTranslateTextNode [Loading]
Node -> DeepTranslatorNode: DeepTranslatorCLIPTextEncodeNode, DeepTranslatorTextNode [Loading] Node -> GoogleTranslateNode: GoogleTranslateCLIPTextEncodeNode, GoogleTranslateTextNode [Loading]

Node -> PoseNode: PoseNode [Loading]
Node -> ExtrasNode: PreviewTextNode, HexToHueNode, ColorsCorrectNode [Loading]
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
Node -> IDENode: IDENode [Loading]
Node -> PainterNode: PainterNode [Loading]

[END] ComfyUI AlekPet Nodes

[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/albumentations/__init__.py:13: UserWarning: A new version of Albumentations is available: 1.4.22 (you have 1.4.15). Upgrade using: pip install -U albumentations. To disable automatic update checks, set the environment variable NO_ALBUMENTATIONS_UPDATE to 1.
check_for_updates()
Adding FFMPEG_PATH to PATH
Total VRAM 8188 MB, total RAM 15842 MB
pytorch version: 2.5.1+cu124
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4060 Laptop GPU : cudaMallocAsync

Import times for custom nodes:
0.0 seconds: /home/ialover/document/ComfyUI/custom_nodes/websocket_image_save.py
0.0 seconds: /home/ialover/document/ComfyUI/custom_nodes/ComfyUI-WD14-Tagger
0.0 seconds: /home/ialover/document/ComfyUI/custom_nodes/ComfyUI-Custom-Scripts
0.0 seconds: /home/ialover/document/ComfyUI/custom_nodes/ComfyUI-KJNodes
0.0 seconds: /home/ialover/document/ComfyUI/custom_nodes/rgthree-comfy
0.0 seconds: /home/ialover/document/ComfyUI/custom_nodes/ComfyUI_essentials
0.0 seconds: /home/ialover/document/ComfyUI/custom_nodes/RES4LYF
0.1 seconds: /home/ialover/document/ComfyUI/custom_nodes/ComfyUI-Manager
0.2 seconds: /home/ialover/document/ComfyUI/custom_nodes/ComfyUI-VideoHelperSuite
0.3 seconds (IMPORT FAILED): /home/ialover/document/ComfyUI/custom_nodes/StyleShot-ComfyUI
0.4 seconds: /home/ialover/document/ComfyUI/custom_nodes/VideoSys-ComfyUI
0.5 seconds: /home/ialover/document/ComfyUI/custom_nodes/DiffMorpher-ComfyUI
0.7 seconds: /home/ialover/document/ComfyUI/custom_nodes/ComfyUI_Custom_Nodes_AlekPet
0.8 seconds: /home/ialover/document/ComfyUI/custom_nodes/CosyVoice-ComfyUI
1.2 seconds: /home/ialover/document/ComfyUI/custom_nodes/ComfyUI_EchoMimic
2.6 seconds: /home/ialover/document/ComfyUI/custom_nodes/IMAGDressing-ComfyUI

Starting server

To see the GUI go to: http://127.0.0.1:8188

Error segment 2:
FETCH DATA from: /home/ialover/document/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json [DONE]
got prompt
****** refer in EchoMimic V2 mode!******
Error while downloading from https://cdn-lfs-us-1.hf.co/repos/d0/d7/d0d7e7eb7185076d8321e5bb078d18b04759ff546da59b402ad8542a261b83ff/15d5d2bb5d184eaf9475285f0d5068bde32ae70c3961c799559ee3a6a25afabd?response-content-disposition=inline%3B+filename*%3DUTF-8%27%27motion_module.pth%3B+filename%3D%22motion_module.pth%22%3B&Expires=1734368470&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTczNDM2ODQ3MH19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy11cy0xLmhmLmNvL3JlcG9zL2QwL2Q3L2QwZDdlN2ViNzE4NTA3NmQ4MzIxZTViYjA3OGQxOGIwNDc1OWZmNTQ2ZGE1OWI0MDJhZDg1NDJhMjYxYjgzZmYvMTVkNWQyYmI1ZDE4NGVhZjk0NzUyODVmMGQ1MDY4YmRlMzJhZTcwYzM5NjFjNzk5NTU5ZWUzYTZhMjVhZmFiZD9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSoifV19&Signature=DfLnbzm5cbnPFmSnzyAGPuXBn6eJQKJeK7OgcprVtKBrlzGmYlCHoJulXb42y6m-FfB0N0EfS6gFH6WPn6JuCNp6k-VFFCJH7rPr99mqfqpjLSB99PqP4PBLIj8KslwijORMV%7El7SdtSYwFLptt0yp6lOau2%7EVuzfJyOyYVp3IBAIii64aQrgSB0rCNOFDvYtlAjvURJLScHMmtmsfV8p4H4otOaL8PoLZWh%7E-RHZfWezs7ZikWfWQ4R9k3Qvf5IUROE2e1GJCTaV5uIHupfIBTQ2tUr6v21s1Opc7O6X4DpGq37rQ0lQYAUvzZBq9FMwZqi-6OUwn3o9eDFAyzSCg__&Key-Pair-Id=K24J24Z295AEI9: HTTPSConnectionPool(host='cdn-lfs-us-1.hf.co', port=443): Read timed out.
Trying to resume download...
Error while downloading from https://cdn-lfs-us-1.hf.co/repos/d0/d7/d0d7e7eb7185076d8321e5bb078d18b04759ff546da59b402ad8542a261b83ff/15d5d2bb5d184eaf9475285f0d5068bde32ae70c3961c799559ee3a6a25afabd?response-content-disposition=inline%3B+filename*%3DUTF-8%27%27motion_module.pth%3B+filename%3D%22motion_module.pth%22%3B&Expires=1734368470&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTczNDM2ODQ3MH19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy11cy0xLmhmLmNvL3JlcG9zL2QwL2Q3L2QwZDdlN2ViNzE4NTA3NmQ4MzIxZTViYjA3OGQxOGIwNDc1OWZmNTQ2ZGE1OWI0MDJhZDg1NDJhMjYxYjgzZmYvMTVkNWQyYmI1ZDE4NGVhZjk0NzUyODVmMGQ1MDY4YmRlMzJhZTcwYzM5NjFjNzk5NTU5ZWUzYTZhMjVhZmFiZD9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSoifV19&Signature=DfLnbzm5cbnPFmSnzyAGPuXBn6eJQKJeK7OgcprVtKBrlzGmYlCHoJulXb42y6m-FfB0N0EfS6gFH6WPn6JuCNp6k-VFFCJH7rPr99mqfqpjLSB99PqP4PBLIj8KslwijORMV%7El7SdtSYwFLptt0yp6lOau2%7EVuzfJyOyYVp3IBAIii64aQrgSB0rCNOFDvYtlAjvURJLScHMmtmsfV8p4H4otOaL8PoLZWh%7E-RHZfWezs7ZikWfWQ4R9k3Qvf5IUROE2e1GJCTaV5uIHupfIBTQ2tUr6v21s1Opc7O6X4DpGq37rQ0lQYAUvzZBq9FMwZqi-6OUwn3o9eDFAyzSCg__&Key-Pair-Id=K24J24Z295AEI9: HTTPSConnectionPool(host='cdn-lfs-us-1.hf.co', port=443): Read timed out.
Trying to resume download...
/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/diffusers/configuration_utils.py:245: FutureWarning: It is deprecated to pass a pretrained model name or path to from_config.If you were trying to load a model, please use <class 'ComfyUI_EchoMimic.echomimic_v2.src.models.unet_2d_condition.UNet2DConditionModel'>.load_config(...) followed by <class 'ComfyUI_EchoMimic.echomimic_v2.src.models.unet_2d_condition.UNet2DConditionModel'>.from_config(...) instead. Otherwise, please make sure to pass a configuration dictionary instead. This functionality will be removed in v1.0.0.
deprecate("config-passed-as-path", "1.0.0", deprecation_message, standard_warn=False)
/home/ialover/document/ComfyUI/custom_nodes/ComfyUI_EchoMimic/EchoMimic_node.py:232: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
re_state = torch.load(re_ckpt, map_location="cpu")
loaded temporal unet's pretrained weights from /home/ialover/document/ComfyUI/models/echo_mimic/unet ...
Load motion module params from /home/ialover/document/ComfyUI/models/echo_mimic/v2/motion_module.pth
Loaded 453.20928M-parameter motion module
/home/ialover/document/ComfyUI/custom_nodes/ComfyUI_EchoMimic/EchoMimic_node.py:278: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
denoising_state = torch.load(denois_pt, map_location="cpu")
/home/ialover/document/ComfyUI/custom_nodes/ComfyUI_EchoMimic/EchoMimic_node.py:305: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
pose_state = torch.load(pose_encoder_pt)
/home/ialover/document/ComfyUI/custom_nodes/ComfyUI_EchoMimic/src/models/whisper/whisper/__init__.py:109: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(fp, map_location=device)
!!! Exception during processing !!! cannot import name 'EncoderDecoderCache' from 'transformers' (/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/transformers/__init__.py)
Traceback (most recent call last):
File "/home/ialover/document/ComfyUI/execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ialover/document/ComfyUI/execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ialover/document/ComfyUI/execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "/home/ialover/document/ComfyUI/execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ialover/document/ComfyUI/custom_nodes/ComfyUI_EchoMimic/EchoMimic_node.py", line 384, in main_loader
pipe = EchoMimicV2Pipeline(
^^^^^^^^^^^^^^^^^^^^
File "/home/ialover/document/ComfyUI/custom_nodes/ComfyUI_EchoMimic/echomimic_v2/src/pipelines/pipeline_echomimicv2.py", line 57, in __init__
self.register_modules(
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/diffusers/pipelines/pipeline_utils.py", line 159, in register_modules
library, class_name = _fetch_class_library_tuple(module)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 730, in _fetch_class_library_tuple
not_compiled_module = _unwrap_model(module)
^^^^^^^^^^^^^^^^^^^^^
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 236, in _unwrap_model
from peft import PeftModel
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/peft/__init__.py", line 22, in <module>
from .auto import (
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/peft/auto.py", line 32, in <module>
from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/peft/mapping.py", line 25, in <module>
from .mixed_model import PeftMixedModel
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/peft/mixed_model.py", line 29, in <module>
from .peft_model import PeftModel
File "/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/peft/peft_model.py", line 37, in <module>
from transformers import Cache, DynamicCache, EncoderDecoderCache, PreTrainedModel
ImportError: cannot import name 'EncoderDecoderCache' from 'transformers' (/home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages/transformers/__init__.py)

I pinned transformers to an older version, but got the following:
(comfyui) ~ ➤ pip install transformers==4.38.2
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting transformers==4.38.2
Using cached https://pypi.tuna.tsinghua.edu.cn/packages/b6/4d/fbe6d89fde59d8107f0a02816c4ac4542a8f9a85559fdf33c68282affcc1/transformers-4.38.2-py3-none-any.whl (8.5 MB)
Requirement already satisfied: filelock in ./anaconda3/envs/comfyui/lib/python3.12/site-packages (from transformers==4.38.2) (3.16.1)
Requirement already satisfied: huggingface-hub<1.0,>=0.19.3 in ./anaconda3/envs/comfyui/lib/python3.12/site-packages (from transformers==4.38.2) (0.26.5)
Requirement already satisfied: numpy>=1.17 in ./anaconda3/envs/comfyui/lib/python3.12/site-packages (from transformers==4.38.2) (1.26.4)
Requirement already satisfied: packaging>=20.0 in ./anaconda3/envs/comfyui/lib/python3.12/site-packages (from transformers==4.38.2) (24.2)
Requirement already satisfied: pyyaml>=5.1 in ./anaconda3/envs/comfyui/lib/python3.12/site-packages (from transformers==4.38.2) (6.0.2)
Requirement already satisfied: regex!=2019.12.17 in ./anaconda3/envs/comfyui/lib/python3.12/site-packages (from transformers==4.38.2) (2024.11.6)
Requirement already satisfied: requests in ./anaconda3/envs/comfyui/lib/python3.12/site-packages (from transformers==4.38.2) (2.32.3)
Requirement already satisfied: tokenizers<0.19,>=0.14 in ./anaconda3/envs/comfyui/lib/python3.12/site-packages (from transformers==4.38.2) (0.15.2)
Requirement already satisfied: safetensors>=0.4.1 in ./anaconda3/envs/comfyui/lib/python3.12/site-packages (from transformers==4.38.2) (0.4.5)
Requirement already satisfied: tqdm>=4.27 in ./anaconda3/envs/comfyui/lib/python3.12/site-packages (from transformers==4.38.2) (4.67.1)
Requirement already satisfied: fsspec>=2023.5.0 in ./anaconda3/envs/comfyui/lib/python3.12/site-packages (from huggingface-hub<1.0,>=0.19.3->transformers==4.38.2) (2024.9.0)
Requirement already satisfied: typing-extensions>=3.7.4.3 in ./anaconda3/envs/comfyui/lib/python3.12/site-packages (from huggingface-hub<1.0,>=0.19.3->transformers==4.38.2) (4.12.2)
Requirement already satisfied: charset-normalizer<4,>=2 in ./anaconda3/envs/comfyui/lib/python3.12/site-packages (from requests->transformers==4.38.2) (3.4.0)
Requirement already satisfied: idna<4,>=2.5 in ./anaconda3/envs/comfyui/lib/python3.12/site-packages (from requests->transformers==4.38.2) (3.10)
Requirement already satisfied: urllib3<3,>=1.21.1 in ./anaconda3/envs/comfyui/lib/python3.12/site-packages (from requests->transformers==4.38.2) (1.26.20)
Requirement already satisfied: certifi>=2017.4.17 in ./anaconda3/envs/comfyui/lib/python3.12/site-packages (from requests->transformers==4.38.2) (2024.8.30)
Installing collected packages: transformers
Attempting uninstall: transformers
Found existing installation: transformers 4.39.3
Uninstalling transformers-4.39.3:
Successfully uninstalled transformers-4.39.3
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
colossalai 0.4.6 requires diffusers==0.29.0, but you have diffusers 0.30.0 which is incompatible.
colossalai 0.4.6 requires torch<=2.4.1,>=2.2.0, but you have torch 2.5.1 which is incompatible.
colossalai 0.4.6 requires transformers==4.39.3, but you have transformers 4.38.2 which is incompatible.
Successfully installed transformers-4.38.2
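
For reference, a quick way to confirm whether the installed transformers actually provides the symbol the traceback complains about, and to list declared dependency conflicts such as the colossalai pins above, is something like the following (a minimal sketch; run it inside the comfyui conda environment):

python -c "import transformers; print(transformers.__version__); from transformers import EncoderDecoderCache; print('EncoderDecoderCache OK')"
pip check   # reports installed packages whose declared requirements are not satisfied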

Your error log first shows that the model could not be downloaded over the network; after that it is a transformers version problem, which also involves the peft library. Also make sure ffmpeg is set up properly on Linux.
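
If the automatic download keeps timing out, one workaround is to fetch the weights by hand and, if needed, point huggingface_hub at a mirror. A rough sketch, assuming the EchoMimic V2 weights come from the BadToBest/EchoMimicV2 repo on Hugging Face and belong under ComfyUI/models/echo_mimic/v2 (check the node's README for the exact repo and file layout):

export HF_ENDPOINT=https://hf-mirror.com   # optional mirror if the hf.co CDN keeps timing out
huggingface-cli download BadToBest/EchoMimicV2 motion_module.pth --local-dir /home/ialover/document/ComfyUI/models/echo_mimic/v2
ffmpeg -version   # confirm ffmpeg is installed and on PATH; the node adds FFMPEG_PATH at startup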

Although the model download failed at first, it finished downloading later, so that part should be fine now, as shown below.
(screenshot)
As for these two libraries, how do I determine the suitable versions? (One approach is sketched after the package list below.) Here are the package versions in my environment:
(comfyui) v2 ➤ pip show transformers git:master
Name: transformers
Version: 4.38.2
Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow
Home-page: https://github.com/huggingface/transformers
Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors)
Author-email: transformers@huggingface.co
License: Apache 2.0 License
Location: /home/ialover/anaconda3/envs/comfyui/lib/python3.12/site-packages
Requires: filelock, huggingface-hub, numpy, packaging, pyyaml, regex, requests, safetensors, tokenizers, tqdm
Required-by: colossalai, galore-torch, peft
(comfyui) v2 ➤ cd git:master
(comfyui) ~ ➤ cat echomimi.txt
absl-py==2.1.0
accelerate==1.2.0
aiohappyeyeballs==2.4.4
aiohttp==3.11.10
aiosignal==1.3.1
albucore==0.0.16
albumentations==1.4.15
annotated-types==0.7.0
antlr4-python3-runtime==4.9.3
anyio==4.7.0
argostranslate==1.9.6
asttokens==3.0.0
async-timeout==5.0.1
attrs==24.2.0
audioread==3.0.1
audiosegment==0.23.0
av==14.0.1
bcrypt==4.2.1
beautifulsoup4==4.12.3
bitsandbytes==0.45.0
botocore==1.35.80
certifi==2024.8.30
cffi==1.17.1
cfgv==3.4.0
charset-normalizer==3.4.0
click==8.1.7
color-matcher==0.5.0
colorama==0.4.6
coloredlogs==15.0.1
colossalai==0.4.6
colour-science==0.4.6
conformer==0.3.2
contexttimer==0.3.3
contourpy==1.3.1
cryptography==44.0.0
ctranslate2==4.5.0
cycler==0.12.1
Cython==3.0.11
datasets==3.2.0
ddt==1.7.2
decorator==5.1.1
deep-translator==1.11.4
deepspeed==0.16.1
Deprecated==1.2.15
diffusers==0.30.0
dill==0.3.8
distlib==0.3.9
docutils==0.21.2
easydict==1.13
einops==0.8.0
eval_type_backport==0.2.0
executing==2.1.0
fabric==3.2.2
fastapi==0.115.6
ffmpeg-python==0.2.0
filelock==3.16.1
flatbuffers==24.3.25
flet==0.25.1
fonttools==4.55.3
frozenlist==1.5.0
fsspec==2024.9.0
ftfy==6.3.1
future==1.0.0
galore-torch==1.0
gdown==5.2.0
gitdb==4.0.11
GitPython==3.1.43
google==3.0.0
googletrans-py==4.0.0
grpcio==1.68.1
grpcio-tools==1.68.1
h11==0.14.0
h2==4.1.0
hjson==3.1.0
hpack==4.0.0
httpcore==1.0.7
httpx==0.28.1
huggingface-hub==0.26.5
humanfriendly==10.0
hydra-core==1.3.2
hyperframe==6.0.1
HyperPyYAML==1.2.2
icecream==2.1.3
identify==2.6.3
idna==3.10
imageio==2.36.1
imageio-ffmpeg==0.5.1
importlib_metadata==8.5.0
inflect==7.4.0
insightface==0.7.3
invoke==2.2.0
ipython==8.30.0
jax==0.4.37
jaxlib==0.4.36
jedi==0.19.2
jieba==0.42.1
Jinja2==3.1.4
jmespath==1.0.1
joblib==1.4.2
jsonschema==4.23.0
jsonschema-specifications==2024.10.1
kiwisolver==1.4.7
kornia==0.7.4
kornia_rs==0.1.7
lazy_loader==0.4
librosa==0.10.2.post1
lightning==2.4.0
lightning-utilities==0.11.9
llvmlite==0.43.0
lpips==0.1.4
Markdown==3.7
markdown-it-py==3.0.0
MarkupSafe==3.0.2
matplotlib==3.9.4
matplotlib-inline==0.1.7
matrix-client==0.4.0
mdurl==0.1.2
mediapipe==0.10.18
ml_dtypes==0.5.0
modelscope==1.21.0
more-itertools==10.5.0
moviepy==2.1.1
mpmath==1.3.0
msgpack==1.1.0
mss==10.0.0
multidict==6.1.0
multiprocess==0.70.16
networkx==3.4.2
ninja==1.11.1.2
nodeenv==1.9.1
numba==0.60.0
numpy==1.26.4
nvidia-cublas-cu12==12.4.5.8
nvidia-cuda-cupti-cu12==12.4.127
nvidia-cuda-nvrtc-cu12==12.4.127
nvidia-cuda-runtime-cu12==12.4.127
nvidia-cudnn-cu12==9.1.0.70
nvidia-cufft-cu12==11.2.1.3
nvidia-curand-cu12==10.3.5.147
nvidia-cusolver-cu12==11.6.1.9
nvidia-cusparse-cu12==12.3.1.170
nvidia-ml-py==12.560.30
nvidia-nccl-cu12==2.21.5
nvidia-nvjitlink-cu12==12.4.127
nvidia-nvtx-cu12==12.4.127
oauthlib==3.2.2
omegaconf==2.3.0
onnx==1.17.0
onnxruntime==1.20.1
onnxruntime-gpu==1.20.1
openai-whisper==20240930
opencv-contrib-python==4.10.0.84
opencv-python==4.10.0.84
opencv-python-headless==4.10.0.84
opt_einsum==3.4.0
packaging==24.2
pandas==2.2.3
paramiko==3.5.0
parso==0.8.4
peft==0.14.0
pexpect==4.9.0
pillow==10.4.0
pip==24.3.1
pixeloe==0.0.10
platformdirs==4.3.6
plumbum==1.9.0
pooch==1.8.2
pre_commit==4.0.1
prettytable==3.12.0
proglog==0.1.10
prompt_toolkit==3.0.48
propcache==0.2.1
protobuf==4.25.5
psutil==6.1.0
ptyprocess==0.7.0
pure_eval==0.2.3
py-cpuinfo==9.0.0
pyarrow==18.1.0
pyarrow-hotfix==0.6
pycparser==2.22
pydantic==2.10.3
pydantic_core==2.27.1
pydub==0.25.1
PyGithub==2.5.0
Pygments==2.18.0
PyJWT==2.10.1
PyMatting==1.1.13
PyNaCl==1.5.0
pynvml==12.0.0
pyparsing==3.2.0
pypinyin==0.53.0
PySocks==1.7.1
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
pytorch-lightning==2.4.0
pytz==2024.2
PyWavelets==1.8.0
PyYAML==6.0.2
ray==2.40.0
referencing==0.35.1
regex==2024.11.6
rembg==2.0.60
repath==0.9.0
requests==2.32.3
rich==13.9.4
rpds-py==0.22.3
rpyc==6.0.0
ruamel.yaml==0.18.6
ruamel.yaml.clib==0.2.12
sacremoses==0.0.53
safetensors==0.4.5
scikit-image==0.24.0
scikit-learn==1.6.0
scipy==1.14.1
seaborn==0.13.2
sentencepiece==0.2.0
setuptools==75.6.0
shellingham==1.5.4
six==1.17.0
smmap==5.0.1
sniffio==1.3.1
sounddevice==0.5.1
soundfile==0.12.1
soupsieve==2.6
soxr==0.5.0.post1
spandrel==0.4.0
srt==3.5.3
stack-data==0.6.3
stanza==1.1.1
starlette==0.41.3
sympy==1.13.1
tensorboard==2.18.0
tensorboard-data-server==0.7.2
threadpoolctl==3.5.0
tifffile==2024.12.12
tiktoken==0.8.0
timm==1.0.12
tokenizers==0.15.2
torch==2.5.1
torchaudio==2.5.1+cu124
torchmetrics==1.6.0
torchsde==0.2.6
torchtyping==0.1.5
torchvision==0.20.1+cu124
tqdm==4.67.1
traitlets==5.14.3
trampoline==0.1.2
transformers==4.39.3
transparent-background==1.3.3
triton==3.1.0
typeguard==2.13.3
typer==0.15.1
typing_extensions==4.12.2
tzdata==2024.2
ultralytics==8.2.84
ultralytics-thop==2.0.13
urllib3==1.26.20
uvicorn==0.29.0
virtualenv==20.28.0
wcwidth==0.2.13
webrtcvad==2.0.10
Werkzeug==3.1.3
wget==3.2
wheel==0.45.1
wrapt==1.17.0
xxhash==3.5.0
yarl==1.18.3
zipp==3.21.0
(comfyui) ~ ➤
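
Regarding the version question above: rather than guessing package by package, one option is to install whatever the node itself declares. A sketch, assuming ComfyUI_EchoMimic ships a requirements.txt (most ComfyUI custom nodes do):

cd /home/ialover/document/ComfyUI/custom_nodes/ComfyUI_EchoMimic
pip install -r requirements.txt   # installs the versions the node author tested against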

Thank you, author!

I need to see the error messages; I don't need the package list.

from transformers import Cache, DynamicCache, EncoderDecoderCache, PreTrainedModel
ImportError: cannot import name 'EncoderDecoderCache' from 'transformers' — this is the main error. My transformers version was originally 4.39.3, and I then pinned it to 4.38.2, but the same error still appears. It would be great if you could give the versions of the key libraries (Linux), or better yet the versions of a full environment that runs to completion. I've already starred the repo — thank you.

Per the official EchoMimic requirements, transformers >= 4.46.3.
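
In practice that means upgrading transformers rather than downgrading it. A minimal sketch of the fix (pip should also pull in a compatible tokenizers; note that colossalai's transformers==4.39.3 pin seen in the earlier log will then be reported as a conflict, which can be ignored if colossalai is not needed for this workflow):

pip install -U "transformers>=4.46.3"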

Yes, it works now. Thank you very much!