Issues
HTTP local server
#61 opened by Khampol - 0
How can I pass data to Omost as the JSON string for loading background conditioning?
#60 opened by HiyungXu - 1
ollama support?
#11 opened by jesenzhang - 0
More examples/workflows?
#59 opened by ClothingAI - 1
Hello, I like this project, but I found it is incompatible with the current FLUX
#58 opened by wuliang19869312 - 0
Running LLMchat yields no results
#57 opened by Eikwang - 9
It would be great if this worked with FLUX
#56 opened by zergqyue - 2
AttributeError: 'NoneType' object has no attribute 'cdequantize_blockwise_fp32'
#45 opened by jadechip - 0
Achieving llama.cpp Speed with Lower VRAM Requirements using llama-cpp-python
#53 opened by Rio77Shiina - 0
How to run text-generation-inference (TGI) with downloaded model parameters?
#50 opened by zk19971101 - 2
I hope to install an offline model
#32 opened by smae08 - 1
Please help me, KeyError: 'llama'
#43 opened by Kevinjuly - 1
Could you update the installation...?
#47 opened by K-O-N-B - 0
About saving prompt words
#46 opened by smae08 - 1
Hope mac arm support
#37 opened by alexcc4 - 1
Is it possible to set canvas size?
#44 opened by elphamale - 0
Will SD3 be supported in the future?
#42 opened by dk19930125 - 1
Prompt generation is very slow
#41 opened by sipie800 - 1
Hope to support SD3
#38 opened by FLYleoYBQ - 1
It doesn't seem like it's just me who doesn't know where the llm model should go.
#23 opened by chenpipi0807 - 0
Using `low_cpu_mem_usage=True` or a `device_map` requires Accelerate: `pip install accelerate`
#30 opened by satangel2222 - 2
RuntimeError: Only Tensors of floating point and complex dtype can require gradients
#24 opened by wildage - 1
Will this node later add multi-turn conversation for editing images?
#29 opened by MoonDownHugo - 4
OmostLoadCanvasConditioningNode Error
#4 opened by MrGingerSir - 0
More Inference options
#15 opened by zdaar - 1
An error occurs when the input is too long
#19 opened by ailfreedom - 0
Error when loading phi
#20 opened by caissonfiv - 5
Error running "Omost LLM Loader"
#5 opened by xiao-ning-ning - 3
No package metadata was found for bitsandbytes
#10 opened by ziziguai - 2
The [ImageFromBatch+] node in the example 27k workflow cannot be found
#18 opened by dxyklp - 1
Hope support for the HunYuanDiT large model can be added
#9 opened by FLYleoYBQ - 0
Detected int4 weights but the version of bitsandbytes is not compatible with int4 serialization.
#8 opened by Cladislav - 1
Failed to import transformers.models.cohere.configuration_cohere because of the following error (look up to see its traceback): No module named 'transformers.models.cohere.configuration_cohere'
#7 opened by alenhrp - 1
How to use
#6 opened by CJT666 - 0
Why is my third node different from yours?
#2 opened by linjian-ufo - 1