Issues
[BUG] qwen-vl stage-1 training loss drops, then rises again; the model collapses
#358 opened by liuheng0111 - 0
Please answer in plain language
#383 opened by HelloWorldCoder-China - 1
[BUG] Output does not follow the content required by the prompt
#380 opened by ybshaw - 0
[BUG] During LoRA training, how can I also unfreeze other modules for non-LoRA training?
#378 opened by jweihe - 0
How do I retrieve the generated images?
#381 opened by bob-liu-1990 - 0
Unable to load the model
#376 opened by wanghaoran-ucas - 1
[BUG] Inference with ModelScope fails: illegal instruction (core dumped)
#356 opened by hwddzx - 0
[BUG] Unable to load trained LoRA model weights using AutoPeftModelForCausalLM.from_pretrained()
#379 opened by jweihe - 2
[Question] Significant gap between image-understanding Q&A results from the web UI and from API calls
#343 opened by gjhhust - 4
[BUG] RuntimeError: GET was unable to find an engine to execute this computation
#339 opened by shiqwang - 2
[BUG] AutoGPTQForCausalLM.from_quantized("Qwen/Qwen-VL-Chat-Int4", ...) raises an error
#371 opened by xiayq1 - 1
RuntimeError: "_amp_foreach_non_finite_check_and_unscale_cuda" not implemented for 'BFloat16'
#372 opened by Qinger27 - 1
How can I fine-tune Qwen-VL for image-text retrieval tasks?
#375 opened by MayCloud052 - 2
💡 [Image input methods] - Is passing images in encoded form supported, e.g. base64-encoded images?
#373 opened by sunmoon-1024 - 3
💡 [REQUEST] - How can I resume yesterday's training run and continue training?
#359 opened by sunjunlishi - 2
Does the global batch size reported in Table 10 of the paper include gradient-accumulation steps?
#370 opened by YanqiDai - 1
After full-parameter fine-tuning on 3,000 samples for 10 epochs, the model still hallucinates heavily and repeats itself within a single answer
#365 opened by yuemengrui - 4
Loss drops to 0 at the second step
#352 opened by TAOSHss - 1
[BUG] Is applying LoRA again on top of an existing LoRA supported?
#360 opened by todayplusplus - 0
How can Qwen-VL achieve industrial anomaly-detection results like anomalib?
#368 opened by monkeycc - 2
How do I fine-tune qwen-vl with LoRA? Where is modules_to_save configured? (chatml)
#340 opened by OliverLeeXZ - 1
[BUG] QLoRA cannot train VL. Error message: "we only spport quantization for text model. Support for vision, speech and multimodel will come later."
#367 opened by xiayq1 - 8
[BUG] openai_api.py cannot analyze local images
#353 opened by qinzhenyi1314 - 0
[BUG] Local model inference raises an error
#366 opened by xiyangyang99 - 1
When fine-tuning with local data, can the id field in the JSON-format data file be set to the same value for every sample?
#341 opened by whysirier - 0
Question: how do I upload files to the Qwen1.5 tokenizer?
#362 opened by shenyugub - 0
[Question] Can fine-tuning be run on two 4090 GPUs?
#357 opened by wangqinghuan - 0
"Beyond" is treated as a banned word by Qwen-VL-Max
#347 opened by JY9087 - 3
[BUG] Two GPUs with 16 GB each (32.768 GB total) still report out-of-memory during inference
#338 opened by zhaofangtao - 3
💡 [REQUEST] - When will vLLM deployment be supported?
#336 opened by su-zelong - 1
[BUG] When will qwen-vl support llama.cpp conversion? Is there a tool to convert it to GGUF or GGML format?
#344 opened by fan-chao - 0
[BUG] With multi-sample inference, only the first result is reasonable
#355 opened by xinghedyc - 0
Testing with evaluate_caption.py: the converted files are downloaded and the data paths match, but I get No such file or directory: 'data/nocaps/val/0013ea2087020901.jpg'. Do I also need to download the images?
#335 opened by AlexMa0 - 0
[BUG] Error when using Qwen-VL-Chat-Int4
#351 opened by lzh1998-jansen - 0
Is --model_max_length the output text length? Does lowering it during training save memory?
#350 opened by chuangzhidan - 0
[BUG] Full-parameter fine-tuning of VL fails: terminate called after throwing an instance of 'c10::Error' what(): CUDA error: an illegal memory access was encountered. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
#349 opened by chuangzhidan - 1
Stage-2 and stage-3 fine-tuning of the Qwen-VL model
#346 opened by lyc728 - 0
[Question/Feature request] Can image data already loaded into memory be used in the prompt?
#345 opened by shtu-ryan - 2
Stage-2 and stage-3 fine-tuning of the Qwen model
#337 opened by lyc728 - 0
Is it safe to use the <|extra_{k}|> special token?
#342 opened by xwk - 7
Question about the test-data format for evaluate_grounding.py
#334 opened by yihp