Issues
[Bug] InternLM2 int4 repeats itself and repeats preceding content (system prompt)
#706 opened by sanbuphy - 0
[QA] Decoding issue when setting do_sample=False (greedy decoding)
#738 opened by fxb392 - 6
[Feature] A script to convert the internLM2 LoRA part into llama style
#737 opened by ChengYouFancy - 3
[Feature] Is tensorrt-llm already supported, or is support planned?
#714 opened by mblank5 - 3
[QA] How can the model be fine-tuned on Ascend 910?
#736 opened by rourouZ - 2
[QA] When fine-tuning a QA bot with InternLM, how can I make it pick answers from the training data instead of generating its own?
#728 opened by ILG2021 - 6
[QA] Can internlm2 be supported in fastchat?
#723 opened by WuLindong1997 - 3
[Bug] safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge
#731 opened by yuanphoenix - 1
[Feature] convert2llama.py
#729 opened by bladeswill - 2
[Bug] The internlm2_chat_1.8b model does not support multi-turn conversation
#726 opened by hello-gary-2022 - 6
[Bug] llama.cpp internlm2 function calling bug
#696 opened by bdqfork - 5
[QA] Question about the chat_template of the InternLM2 model
#700 opened by DirtyKnightForVi - 6
[QA] Questions about InternLM 2's recognition of different scripts, its generation ability, and fine-tuning
#704 opened by timousT - 1
[QA] Will the PPO training code and reward model be open-sourced?
#712 opened by LYMDLUT - 5
Is deployment on MindSpore with 910A or 910B supported?
#698 opened by ChingKwanCheung - 3
A one-person petition in blood: please explain how to clean the data!
#718 opened by Jianfeng777 - 0
A ten-thousand-person petition in blood for InternLM2-4B ❗❕❗❕❗
#719 opened by SaaRaaS-1300 - 2
[QA] Is fine-tuning with a 200k context supported, and what kind of configuration does it require?
#702 opened by Labmem009 - 2
[Bug] Special tokens are still mismatched.
#715 opened by Li-Qingyun - 0
[Bug] internlm2-chat-20b download from huggingface returns a 503 error, model not found
#716 opened by DefTruth - 19
[Bug] When loading a model with transformers and using stream chat, the English response seems to contain no whitespace characters.
#709 opened by zhulinJulia24 - 2
[Bug] During the fine-tuning eval stage, results from generate contain </s>
#701 opened by AZYoung233 - 2
[Bug] `Unrecognized configuration class Error` returned by `AutoTokenizer.from_pretrained` with InternLM-chat-1.8b-sft (transformers==4.36)
#688 opened by openmmlab-bot - 3
[QA] To fine-tune a domain model on internLM with domain data, should I use internLM-20B-chat or internLM-20B-sft?
#694 opened by honglianglv - 2
[QA] How can inference speed be improved?
#697 opened by luzhongyu - 6
[QA] Why is the output garbled after I deployed following the lmdeploy tutorial? Screenshots below; thanks for any help
#665 opened by sunshicheng1 - 4
[Bug] Running the ModelScope example code raises AttributeError: 'ChatStreamer' object has no attribute 'cache'
#668 opened by tcexeexe - 5
[Bug] AttributeError: module 'tokenizers.decoders' has no attribute 'Replace'
#655 opened by jiaoyang3 - 4
[QA] Hanging issue and max_position_embeddings
#690 opened by xxg98 - 0
[QA] The tokenizer's special-token encoding of <|im_start|> is incorrect
#693 opened by shipengai - 2
After downloading the InternLM2 model, loading the tokenizer and model from a local path raises an error
#691 opened by syp1997 - 7
[Bug] Model inference aborts
#669 opened by Patrick-Ni - 0
[QA] max_token of the data when fine-tuning InternLM2
#681 opened by MING-ZCH - 1
[QA] How many tokens does internlm2-chat-7b itself support?
#683 opened by xxg98 - 5
[Bug] The demo produced with LMDeploy has problems
#684 opened by duliangang - 2
[Bug] Error when loading internlm2-7b-chat
#677 opened by xxg98 - 2
[Bug] api_server cannot accept the --session-len argument
#678 opened by xxg98 - 4
[QA] The model's output length is limited during inference; is there a parameter to control output length?
#675 opened by ChingKwanCheung - 1
Does using NTK directly support up to 200K? Was the model trained on 200k contexts? Is there an evaluation report for ultra-long text?
#676 opened by lvjianxin - 1
[Feature] Wish: an MoE multimodal model
#670 opened by sanbuphy - 4
[QA] Running with transformers >= 4.34 raises CUDA error: device-side assert triggered
#659 opened by zhulinJulia24 - 3
[QA] Will a tutorial on long-context fine-tuning be released? For example, roughly how much GPU memory does fine-tuning on a 100k instruction corpus require, and which training strategy should be chosen?
#661 opened by Labmem009 - 4
[Bug] internlm2-20B produces garbled output during inference; internlm2-7B infers normally.
#657 opened by zexuanqiu