baichuan-inc/Baichuan-13B
A 13B large language model developed by Baichuan Intelligent Technology
Python · Apache-2.0
Issues
NPU deployment
#206 opened by httang1224 - 0
How to speed up model inference?
#205 opened by hunfwj - 0
What quantization algorithm does the quantized version of Baichuan 2 use?
#203 opened by huangxiancun - 2
Are there plans to strengthen the function calling capability of the open-source Baichuan models?
#164 opened by huajianmao - 1
Baichuan-13B performs poorly with vLLM
#202 opened by moseshu - 1
web_demo.py fails at runtime with CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
#159 opened by matyhtf - 0
feat: function calling
#201 opened by wey-gu - 2
cli_demo.py: abnormal Q&A output after switching to Baichuan-13B-Base
#176 opened by cgq0816 - 0
Error when loading a trained model into web.demo
#200 opened by ghh1125 - 0
Can Baichuan-13B-Base be deployed on a V100?
#197 opened by JasonFlyBeauty - 4
The ALiBi mask is inconsistent with the paper
#172 opened by ReactiveCJ - 0
After fine-tuning baichuan2-13b and obtaining a .pth file, how do I run inference?
#198 opened by dongdongqiang2018 - 2
Local deployment version issue
#196 opened by JasonFlyBeauty - 1
web demo output becomes very slow under concurrent multi-user load?
#178 opened by jamesruio - 0
Example of batch generation with baichuan-13b-chat
#195 opened by MrInouye - 2
Under concurrent requests to the model API, inference time grows linearly; is there a good way to speed up inference?
#166 opened by chaotec - 0
Problem reproducing baichuan2 MMLU results
#194 opened by zhanghan1992 - 0
Does everyone rent GPU servers to run large models?
#193 opened by jiuwenyu - 0
Does this model not support multi-GPU mode?
#192 opened by 394988736 - 0
ValueError: The current `device_map` had weights offloaded to the disk. Please provide an `offload_folder` for them. Alternatively, make sure you have `safetensors` installed if the model you are using offers the weights in this format.
#191 opened by klj123wan - 1
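This `ValueError` is raised by Accelerate's big-model loading when `device_map="auto"` has to spill weights to disk, and the message itself names the fix: pass an `offload_folder`. A minimal hedged sketch, assuming the Hugging Face `transformers` API (the model id and folder name are illustrative, not from the issue):

```python
from transformers import AutoModelForCausalLM

# Hedged sketch: when device_map="auto" offloads some weights to disk,
# Accelerate needs a writable directory to store them.
model = AutoModelForCausalLM.from_pretrained(
    "baichuan-inc/Baichuan-13B-Chat",
    device_map="auto",
    trust_remote_code=True,
    offload_folder="offload",  # any writable directory silences the ValueError
)
```

Alternatively, as the error text suggests, installing `safetensors` lets disk-offloaded weights be memory-mapped without an explicit folder, when the checkpoint ships safetensors files.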
How to deploy offline?
#189 opened by wangnaihao - 1
Sometimes generation gets stuck and keeps producing output, and model.chat waits a very long time for a response; is there a way to quickly terminate these overly long responses?
#182 opened by janglichao - 2
Can anyone share code for calling api.py in streaming mode?
#186 opened by xuyaokun - 2
ValueError: Tokenizer class BaichuanTokenizer does not exist or is not currently imported.
#190 opened by lonngxiang - 1
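The `BaichuanTokenizer` class is shipped inside the model repository rather than in the `transformers` package, so `AutoTokenizer` can only resolve it when remote code execution is allowed. A minimal hedged sketch of the commonly suggested fix:

```python
from transformers import AutoTokenizer

# Without trust_remote_code=True, AutoTokenizer cannot import the custom
# BaichuanTokenizer class bundled with the checkpoint and raises
# "Tokenizer class BaichuanTokenizer does not exist or is not currently imported."
tokenizer = AutoTokenizer.from_pretrained(
    "baichuan-inc/Baichuan-13B-Chat",
    trust_remote_code=True,
)
```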
baichuan-13b-chat SFT fine-tuning loss does not decrease
#188 opened by xiaohuihwh - 1
Why does the ALiBi encoding differ from standard ALiBi encoding?
#185 opened by wx971025 - 0
Adding a length limit in the prompt has no effect
#184 opened by Jessie37464 - 0
Large gap between BF16 and FP32 inference results; is this expected?
#169 opened by NicholasYoungAI - 0
fasttransformer inference
#183 opened by HalcyonLiang - 0
How to fix the random seed for sampling?
#181 opened by wangzhijian-tal - 1
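On fixing the sampling seed: in the Hugging Face stack one would typically call `transformers.set_seed(seed)` before `generate`/`chat` so that stochastic decoding (`do_sample=True`) is reproducible; `set_seed` seeds Python, NumPy, and PyTorch RNGs in one call. The sketch below demonstrates the principle with only the stdlib `random` module standing in for the framework RNGs (names here are illustrative):

```python
import random

def sample_tokens(seed: int, vocab: list, n: int = 5) -> list:
    """Stand-in for stochastic decoding: reseed, then draw n tokens.

    With a real model, transformers.set_seed(seed) plays the role that
    random.seed(seed) plays here.
    """
    random.seed(seed)
    return [random.choice(vocab) for _ in range(n)]

vocab = ["the", "cat", "sat", "on", "mat"]
run1 = sample_tokens(42, vocab)
run2 = sample_tokens(42, vocab)
assert run1 == run2  # same seed -> identical draws
```

The key design point is that reseeding must happen immediately before each generation call; seeding once at startup only makes the first generation reproducible.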
How to run model inference on a single GPU?
#180 opened by tanglu86 - 0
[Evaluation] Evaluation results for Baichuan models on OpenCompass
#177 opened by Leymore - 0
Reward model loss does not decrease when training with baichuan-13b
#175 opened by zhangzuizui - 3
Baichuan tokenizer segments classical poetry inaccurately
#161 opened by CanvaChen - 1
Deploy Failed
#174 opened by herdonyan - 1
Does Baichuan-13B-Chat have memory of previously asked questions?
#168 opened by drpanhuaming - 1
Exception when running the official web_demo on a machine with 2 RTX 3060 GPUs
#171 opened by youyajike - 0
Where does the automatically downloaded model get saved?
#173 opened by lonely1215225 - 1
Has anyone run Baichuan-13B on a 16GB MacBook Pro?
#165 opened by mosthandsomeman - 0
How to do pre-training or continued pre-training with Baichuan?
#170 opened by ArtificialZeng - 2
The superfluous word "好的" ("Okay") appears in the output on every question
#167 opened by drpanhuaming - 0
What are the special uses of the \U0010fc06 and <reserved_7> token types in the vocabulary?
#162 opened by CanvaChen - 0
Can LoRA fine-tuning of baichuan-13b-chat be done on two A10s?
#160 opened by suihuoliuying - 0
After int8 or int4 quantization of the model, how do I save it?
#158 opened by wanglaiqi - 0
Can LoRA fine-tuning in text-generation-webui be supported?
#157 opened by BUJIDAOVS - 0
Is such slow inference speed expected?
#156 opened by ggjge