Issues
[Feature Request]: Tearfully begging for MiniCPM 4 integration with Ollama?
#204 opened by fishfree - 2
[Bug]: LoRA fine-tuned model cannot be loaded in vLLM
#241 opened by Caismis - 2
[Bad Case]: Why is inference much slower than even 9B models?
#191 opened by lixiaoyuan1029 - 6
Generation speed is even slower than Qwen2 7B
#202 opened by lucasjinreal - 2
[Bad Case]: error
#195 opened by lhjlhj11 - 2
[Bad Case]: ERROR occurred when converting MiniCPM3 model to GGUF format with llama.cpp, WHY?
#226 opened by ridincal - 8
[Bad Case]: Converting MiniCPM3's original PyTorch .bin files to GGUF fails
#212 opened by sataliulan - 1
[Bug]: Docker container version of sglang does not support minicpm3-4b
#227 opened by cicicji - 1
Fine-tuning with llamafactory reports an error
#238 opened by lifelsl - 5
[Feature Request]: Can tool calling support the OpenAI API function call format?
#239 opened by lonngxiang - 0
Links to the V1.0 test data and scripts are broken
#240 opened by MAxx8371 - 1
ImportError: cannot import name 'SamplerOutput' from 'vllm.sequence' (/root/miniconda3/lib/python3.11/site-packages/vllm/sequence.py)
#236 opened by badarrrr - 1
[Bug]: Installing vllm under WSL reports an error
#232 opened by h122skite - 2
Denis
#235 opened by Denisdiabate22 - 3
[Bad Case]: After deploying the reranker model on the machine, requests fail with an error
#230 opened by TOMATODA - 1
MiniCPM3 fine-tuning
#229 opened by lifelsl - 1
Can MiniCPM be fine-tuned for text classification tasks?
#219 opened by lifelsl - 4
[Bad Case]: Can function calling be done without vllm?
#203 opened by cristianohello - 2
[Bug]: openai_api_server_demo.generate_minicpm:158 - Input length larger than 20
#211 opened by cxgreat2014 - 1
[Bad Case]: Running the simple function call implementation code fails; the corresponding function is not called
#218 opened by heiheiheibj - 1
[Bad Case]: Should the llama.cpp installation order in the readme be adjusted?
#221 opened by SuperAZHE - 3
[Bug]: Followed the official troubleshooting steps for the vllm installation failure, but it still did not run successfully
#222 opened by sssuperrrr - 0
Could you elaborate on the proposed "LLM x MapReduce"?
#225 opened by zhjunqin - 0
Architecture design
#223 opened by slZheng077 - 0
Data mixture ratio in the continued pre-training stage
#220 opened by zyzyyy123 - 3
[Feature Request]: embedding & reranker service issue
#207 opened by lyj157175 - 2
[Bug]: TypeError: 'ChatMessage' object is not subscriptable when MODEL_PATH=openbmb/MiniCPM3-4B
#216 opened by cxgreat2014 - 5
[Bad Case]: openai_api_server.py cannot run with python=3.11
#215 opened by exthirteen - 2
Could you provide sample code for infinite-length-context MapReduce?
#201 opened by msxfXF - 1
What is llmxmapreduce? Any reference?
#197 opened by world2vec - 3
[Feature Request]: Questions about GPU memory and LLM x MapReduce
#208 opened by wciq1208 - 0
I starred the repo as soon as I saw the logo.
#214 opened by WeileiZeng - 0
[Feature Request]: Model fine-tuning
#213 opened by aph-asic - 4
[Bug]: Model architectures ['MiniCPM3ForCausalLM'] are not supported for now.
#199 opened by qq745639151 - 0
How to configure a context longer than 32k?
#210 opened by thunder95 - 8
[Feature Request]: Can vllm use function call directly?
#198 opened by lonngxiang - 5
[Feature Request]: Tool-calling fine-tuning
#205 opened by douzi0248 - 1
[Bad Case]: Multi-agent function run reports an error
#206 opened by lonngxiang - 1
[Feature Request]: How to run MiniCPM on a laptop NPU?
#200 opened by R0k1e - 1
[Feature Request]: Could you provide the correct CUDA version, PyTorch version, and deepspeed version?
#194 opened by joyyang1215 - 1
How much GPU memory is needed to run your demo with vllm?
#193 opened by lifelsl - 1
To facilitate technical exchange, I set up a multimodal LLM discussion group; anyone interested is welcome to join
#192 opened by feihuamantian - 0
You used 1.1T tokens for pre-training; how many GPUs and how much time did that take?
#190 opened by zyh3826 - 1
[Bad Case]: Multimodal MiniCPM-V inference reports an error
#188 opened by c122-ode - 2
Error when running inference with vllm
#189 opened by lifelsl