StartHua/Comfyui_CXH_joy_caption
Recommended ComfyUI nodes (see the node screenshots): Joy_caption + MiniCPMv2_6-prompt-generator + Florence2
Python · Apache-2.0
Issues
ComfyUI Error Report: the Joy_caption node raised OSError: Error no file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory D:/comfyui/models/clip\siglip-so400m-patch14-384.
#99 opened by wbqb986 - 0
florence_nodes.py cannot find the local LLM folder
#112 opened by Nutingnon - 7
Joy_caption_alpha_run: You can't move a model that has some modules offloaded to cpu or disk.
#90 opened by lestersssss - 0
The macOS ComfyUI client still cannot run the nodes after installing requirements.txt
#110 opened by lcw552003428 - 2
Error no file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory D:/comfyui/models/clip\siglip-so400m-patch14-384
#101 opened by 158zd - 0
Download and load the Florence2 model
#109 opened by DNPMBHC - 0
The CHX_Florence2Run node now requires a seed input; how do I connect it? It wasn't needed before.
#108 opened by kame-boop - 5
This modeling file requires the following packages that were not found in your environment: flash_attn. Run `pip install flash_attn`
#91 opened by mr-bob-chang - 0
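Building flash_attn needs a CUDA toolchain and often fails, so before running `pip install flash_attn` it can help to probe for the package and fall back to a slower attention implementation. A hedged sketch, assuming nothing about the node's internals:

```python
# Sketch: detect whether flash_attn is importable so a fallback attention
# implementation (e.g. transformers' attn_implementation="eager") can be
# selected instead of crashing during model load.
import importlib.util

def flash_attn_available() -> bool:
    """True if the flash_attn package can be imported in this environment."""
    return importlib.util.find_spec("flash_attn") is not None
```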
Joy_caption_alpha_load: Descriptors cannot be created directly. If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0. If you cannot immediately regenerate your protos, possible workarounds: 1. Downgrade the protobuf package to 3.20.x or lower. 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (pure-Python parsing, much slower). More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
#85 opened by 108863258 - 0
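The protobuf error above lists its own workarounds. Workaround 1 is pinning `protobuf` to 3.20.x or lower in requirements; workaround 2 can be applied from Python, provided the variable is set before any `*_pb2` module is imported:

```python
# Sketch of workaround 2 from the protobuf error message: force the
# pure-Python protobuf implementation. Must run before any _pb2 module
# is imported; pure-Python parsing is noticeably slower.
import os

os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"
```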
GGUF ?
#106 opened by Metal-dude - 4
Joy_caption: Error(s) in loading state_dict for ImageAdapter: Unexpected key(s) in state_dict: "other_tokens.weight".
#72 opened by xiewci - 1
batch image interrogate error
#71 opened by hopeyan476868 - 0
Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument tensors in method wrapper_CUDA_cat)
#104 opened by 1247862674 - 0
When will ic-lora-format prompts be added?
#102 opened by 807502278 - 2
The pinned dependency versions are too old; can I change them myself?
#70 opened by K-O-N-B - 1
required input is missing: seed
#100 opened by realyn - 3
Error: no file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory C:/comfyui/models/clip\siglip-so400m-patch14-384.
#60 opened by jiej32228 - 1
The CXH_Florence2Run node has problems on Linux
#80 opened by 807502278 - 1
How can I batch-tag images with Joy_caption_v2?
#98 opened by YoucanBaby - 2
Loading JoyCaption with Meta-Llama-3.1-8B-bnb-4bit fails with: `rope_scaling` must be a dictionary with two fields, `type` and `factor`, got {'factor': 8.0, 'high_freq_factor': 4.0, 'low_freq_factor': 1.0, 'original_max_position_embeddings': 8192, 'rope_type': 'llama3'}
#67 opened by Pondowner857 - 0
control character (\u0000-\u001F) found while parsing a string at line 16971
#96 opened by xuhuanquan - 0
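The parser error above points at a raw control character embedded in a string (often pasted into a workflow or caption JSON). A minimal sketch that strips the offending range while keeping tab, newline and carriage return; back the file up before rewriting it:

```python
# Sketch: remove raw control characters (\u0000-\u001F) that string parsers
# reject, keeping \t, \n and \r, which are legitimate whitespace.
import re

_CTRL = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f]")

def strip_control_chars(text: str) -> str:
    return _CTRL.sub("", text)
```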
V2 cannot be used
#95 opened by chenpipi0807 - 0
NEED SDWebUI version!!!
#94 opened by Zhuangvictor0 - 0
Errors when using multiple GPUs.
#93 opened by tiandaoyuxi - 2
Joy_caption_alpha_run load error
#89 opened by LiuGe126 - 0
Remove specific requirement versions
#87 opened by EnragedAntelope - 1
In the Joy_caption_alpha_prompt node, every option starting from name is off by one.
#83 opened by DragonQuix - 1
When installing dependencies, llama-cpp-python==0.2.89 fails to install
#86 opened by doorle - 0
About chat
#84 opened by Chengym2023 - 3
ValueError: list.remove(x): x not in list -- Error when using Florence models
#79 opened by patriciagomesoo - 2
joy caption alpha two support
#75 opened by soldivelot - 0
A protobuf version error is reported, but the installed version checks out; where is the problem?
#76 opened by qingdengke88 - 0
support llama3.2 please
#78 opened by Lagrebanana - 1
Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit the quantized model.
#74 opened by yatoubusha - 6
Joy_caption error when using the bnb-4bit model
#68 opened by K-O-N-B - 0
How to fix this?
#69 opened by vihangasa14 - 1
Please add an option to overwrite or append when an existing tag file is detected; this is really needed
#65 opened by lixida123 - 0
vision_config is None, using default vision config. Unused kwargs: ['_load_in_4bit', '_load_in_8bit', 'quant_method']; these kwargs are not used. `low_cpu_mem_usage` was None, now set to True since model is quantized.
#63 opened by water110 - 0
What's the input parameter temperature?
#62 opened by ykhasia - 0
This error originates from a subprocess, and is likely not a problem with pip.
#61 opened by kuzhushidai