Issues
[Badcase]: Extra output when using Qwen2-7b to translate Chinese
#1005 opened by cjjjy - 6
[Badcase]: qwen2.5-72b inference results on Ascend 910 do not match expectations
#992 opened by tianshiyisi - 2
[Bug]: Qwen2 moe out of memory
#954 opened by FL77N - 3
[Bug]: With vllm serving, function calling from OpenAI's swarm does not work properly
#1015 opened by 18600709862 - 17
[Badcase]: With the same data, fine-tuning loss on the qwen2.5 72B base model is 3x that on qwen2 72B; besides more training data, what else changed in 2.5?
#935 opened by boundles - 5
[Bug]: Large gap between self-tested math scores and reported leaderboard scores for qwen2.5-72b-instruct
#1020 opened by tianshiyisi - 5
[Bug]: Qwen2.5 72B GPTQ-Int8 inference on Nvidia L20 does not match expectations
#1006 opened by renne444 - 17
[Badcase]: Repetitive decoding after SFT on qwen2.5 instruct 14B
#957 opened by 520jefferson - 16
[Badcase]: Qwen2.5-32B-Instruct-GPTQ-Int4 model inference produces garbled text !!!!!!!!!!!!!!!!!!
#945 opened by zhanaali - 8
[Bug]: Deploying qwen2.5-32b-instruct-gptq-int4 with lmdeploy on a 4x 16GB V100 machine tops out at 80 tokens/s; is this speed normal?
#1023 opened by SolomonLeon - 0
After LoRA fine-tuning Qwen2.5-1.5b with LLaMA-Factory, vLLM reports an error when loading the model; any advice?
#1022 opened by 2500035435 - 0
[Badcase]: qwen2.5 sometimes generates \\n
#1021 opened by 520jefferson - 3
[Badcase]: With the same fine-tuning data, Qwen1.5 14B accuracy is about 20% higher than Qwen2.5 14B; what could cause this?
#1016 opened by Jayc-Z - 0
[Bug]: No heartbeat received from MQLLMEngine
#1013 opened by hulk-zhk - 3
[REQUEST]: Could the Qwen performance report also include time to first token?
#1011 opened by zhufeizzz - 0
[Bug]: With Qwen2.5-72B-instruct deployed via vllm, all Chinese characters in function-calling output are escaped
#1009 opened by ericg108 - 1
[Bug]: Errors during load testing of Qwen2.5-72B-Instruct deployed with vllm
#1008 opened by WangJianQ-0118 - 0
[REQUEST]: Add finetuning scripts
#1007 opened by chansonzhang - 5
[Bug]: Document QA ignores part of the data, e.g. the certificate number is 12345 but the answer says 2345
#997 opened by daimashenjing - 0
[Badcase]: Abnormal tokens (iNdEx) appear in function calling
#991 opened by abiaoa1314 - 2
[Bug]: After deploying with vllm, calling the official example raises openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': "name 'Extension' is not defined", 'type': 'BadRequestError', 'param': None, 'code': 400}
#998 opened by 1gst - 0
Question about the format of the function parameters
#994 opened by XuyangHao123 - 1
[Question]: File upload issue after deploying Qwen2 locally
#983 opened by Patrick24080735 - 3
Hello, how many shots were used to evaluate each dataset in the official Qwen2.5 benchmark results?
#931 opened by 13416157913 - 0
[Question]: Issue deploying qwen2.5-72B-instruct on 2x A100
#933 opened by lxb0425 - 1
[Question]: slow with xInference
#936 opened by Ty2000sdu - 4
[Question]: I ran into some issues when using this model for translation.
#937 opened by Leowolf93 - 1
[REQUEST]: Please share example code for fine-tuning Qwen2-72b or Qwen2-72b-Math
#939 opened by ZhuJD-China - 0
[Question]: Are there no plans to release an MoE model for qwen2.5?
#944 opened by old-wang-95 - 0
[Question]: Suggestion to improve IFEval benchmark
#946 opened by LiweiPE - 1
[Question]: How to train the qwen2 model on an MLM task
#947 opened by lisi-lisi - 0
[Question]: When fine-tuning on Qwen2 (as the LLM of a multimodal model), the forward pass takes an abnormally long time every few steps (about 4-10x a normal step), slowing overall training
#948 opened by CSammyfd - 0
[Question]: Large capability gap between qwen2.5-72B on the Alibaba Cloud Bailian platform and the open-source qwen2.5-72B
#955 opened by darvsum - 0
[Question]: Fine-tuning
#959 opened by Pysion-lin - 0
[Question]: From which version onward does vllm support inference and serving for the Qwen2.5-14B-Instruct model?
#960 opened by zengqingfu1442 - 1
[Bug]: Error when running long-context inference with Hugging Face transformers
#953 opened by allen20200111 - 6
[Bug]: <|endoftext|>, <|endoftext|>Human: endless loop asking itself questions...
#932 opened by vishnunuk