Issues
Could the author put out a tutorial?
#171 opened by kingpingyue - 5
internlm-sft: low GPU utilization during single-machine multi-GPU fine-tuning
#170 opened by Shamepoo - 2
chatglm_v2_6b_lora: how do I configure multi-GPU training? Couldn't find it anywhere
#162 opened by BQQQQB - 5
OOM when running chatglm2-6b-lora on 4x 3080 Ti
#151 opened by imjking - 1
Please add a chatglm3 version; inference keeps failing after fine-tuning
#167 opened by yangliangguang - 1
Is chinese_llama still usable?
#166 opened by kingpingyue - 0
What is causing the segmentation fault?
#165 opened by wanghaosjtu - 0
Could you provide a ChatGLM tutorial?
#164 opened by lzfeifei - 0
Could you provide a ChatGLM
#163 opened by lzfeifei - 0
Can multiple LoRAs be stacked and used together?
#161 opened by worm128 - 1
Help!! How do I set the number of epochs for ChatGlm-v2-6b_Lora??
#160 opened by fengzehui0422 - 0
Can LoRA inference only take a single input? Is there a way to run batched (batch_size > 1) inference?
#159 opened by HuStanding - 2
Help: chatglm2 LoRA training error: RuntimeError: Expected is_sm80 to be true, but got false.
#152 opened by thirttyyy - 5
Single-machine multi-GPU run on two 4090s seems to get slower and slower, even slower than a single GPU
#155 opened by renllll - 0
Can real-time fine-tuning be achieved by adding traditional RL?
#157 opened by LIzhiqian-cassie - 0
Is there documentation for deploying or running the code? Where can I find it?
#156 opened by qwexr - 11
Error running the Chatglm6b_ModelParallel code with the model downloaded from Hugging Face (THUDM/chatglm-6b, commit d2bbc82a2)
#130 opened by Ardang666 - 4
ChatGLM2 LoRA fine-tuning, loading LoRA parameters: RuntimeError: Expected 4-dimensional input for 4-dimensional weight [3072, 32, 1, 1], but got 3-dimensional input of size [1, 64, 4096] instead
#150 opened by yilong2001 - 0
Is data preprocessing the same for chatGLMv2-6b p-tuning and LoRA?
#149 opened by sxl1993 - 0
Hello, author
#148 opened by zhengqianmaifang - 4
Has anyone else seen the loss fail to converge when training with this LoRA code?
#146 opened by DuBaiSheng - 2
chatGLMv2-6b LoRA model parallelism: how many GPUs does the code use?
#147 opened by sxl1993 - 6
Can a chatglm2 model that has already been fully fine-tuned be further fine-tuned with LoRA?
#145 opened by lianglinyi - 1
BLOOM is also a CausalLM-family model; can CPU-accelerated inference be used?
#133 opened by xx-zhang - 4
chatglm2-6b: calling get_peft_model after 8-bit quantization raises an error
#142 opened by wxz2002 - 7
chatglm6b_v2 single-machine multi-GPU training hangs
#138 opened by zoepo - 4
[Chatglm6b_ModelParallel error report]
#141 opened by oier991215 - 1
Can I extract only the vector representation of the chatglm input?
#136 opened by AlanTubring - 2
Is there LoRA fine-tuning code for BLOOM?
#134 opened by acadaiaca - 4
AttributeError: 'ChatGLMForConditionalGeneration' object has no attribute 'enable_input_require_grads'
#135 opened by zoepo - 10
What are the requirements for model parallelism?
#132 opened by taofennanhai - 2
OOM error after LoRA training has been running for a while
#125 opened by 976311200 - 2
CUDA Error
#120 opened by yuntong613 - 4
Chatglm6b_ModelParallel sub-project attempt failed: ran into a model loading problem
#124 opened by shaoqing404 - 3
Training the model: all runs error out
#131 opened by yxk9810 - 1
Does chinese_bloom support multi-turn (contextual) conversation?
#128 opened by gebilaoman - 0
torch
#129 opened by yangliuIOC - 1
Hello, expert
#122 opened by yangliuIOC - 1
Pruning the vocabulary
#126 opened by yangliuIOC - 1
Why was chinese_bloom's default padding side changed to right?
#127 opened by DZ9 - 4
chinese_bloom training via ds errors: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm)
#123 opened by shaoqing404 - 0
How to launch model parallelism
#121 opened by kevinuserdd