beyondguo/LLM-Tuning
Tuning LLMs with no tears💦; Sample Design Engineering (SDE) for more efficient downstream tuning.
HTML
Issues
chatglm2 error: ValueError: weight is on the meta device, we need a `value` to put in on 0
#15 opened by angel1288 - 1
Why is the epoch in the log different from the progress bar?
#58 opened by jimmy-walker - 2
Is there a chat group? Could you set one up?
#9 opened by dragononly - 19
I fine-tuned ChatGLM2 following the readme tutorial, but it had no effect!
#14 opened by HelixPark - 8
Multi-GPU training doesn't seem to run in parallel?
#18 opened by shenmadouyaowen - 0
Tokenization error when using ChatGLM2-6B
#57 opened by kyle-hy - 0
Can the Code Llama fine-tuning script be used for baichuan2?
#56 opened by xhaoss - 2
A question: when fine-tuning chatglm2 with LoRA, is it OK to omit the attention mask?
#21 opened by annw0922 - 0
How to fix `model.hf_device_map` not existing?
#55 opened by LivinLuo1993 - 1
Where is the "my_templates" module?
#53 opened by andyzhu - 2
Can this fine-tuning code be used directly for the baichuan-13B model? It keeps erroring on 13B
#38 opened by DaiJitao - 0
PPO training CUDA out of memory
#52 opened by 14H034160212 - 0
Error when LoRA fine-tuning chatglm2-6b with int4 precision
#50 opened by hehuomu - 0
baichuan-13b reward model training
#48 opened by endlesstalking - 3
Why does baichuan_lora_tuning stay stuck at 8% when running?
#44 opened by yanduoduan - 1
Why does GPU memory keep growing during training? It OOMs after a while
#45 opened by Amazing-J - 1
Error loading the dataset
#47 opened by endlesstalking - 1
chatglm2 doesn't support SequenceClassification; how can this be resolved?
#46 opened by jaycehw - 4
TORCH_USE_CUDA_DSA
#34 opened by Mrjude - 2
A question about full-parameter vs. LoRA fine-tuning
#28 opened by llmrainer - 1
How to generate tokenized_data in an offline environment?
#42 opened by seek4self - 1
Is the rulai_enhance.json data open-sourced?
#41 opened by zlszhonglongshen - 1
Error when fine-tuning ChatGLM2-6B with LoRA
#39 opened by QJShan - 2
How to evaluate model performance during fine-tuning?
#13 opened by dxyzx0 - 1
Could a feature be added to merge the weights produced by LoRA tuning?
#37 opened by litetoooooom - 3
How to make the model memorize specific knowledge during fine-tuning?
#29 opened by controZheng - 5
How to train on multi-turn dialogues?
#7 opened by liuhuapiaoyuan - 1
Training hangs on 6 Tesla A100 40G GPUs
#35 opened by cherishtttz - 1
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
#36 opened by zlszhonglongshen - 3
Is fine-tuning on multi-turn dialogues supported?
#33 opened by yaakua - 1
How much GPU memory is needed to fine-tune glm2 and glm respectively?
#32 opened by ShiXiangXiang123 - 1
CUDA version
#30 opened by 1910183821 - 1
Could there be a way to support mps on Apple M-series processors?
#27 opened by minjin - 1
Fine-tuning error as follows:
#31 opened by ShiXiangXiang123 - 1
Could prediction and inference code be added?
#22 opened by RileyShe - 1
Are there plans to add Chinese-LLaMA-Alpaca?
#20 opened by SilenceWinter - 1
Output length is too short after fine-tuning
#19 opened by dragononly - 1
The trained weights file is very small; is that normal?
#16 opened by dragononly - 0
Loaded and ran the model after training with no effect; please check whether this is correct
#17 opened by dragononly - 7
Runtime error
#8 opened by shenmadouyaowen - 0
After reading issue #8 and commenting out two lines, error 119 is raised
#12 opened by BoFan-tunning - 3
Error "KeyError: 'transformer.embedding'"
#10 opened by lilulu0702
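Issue #37 above asks for a feature to merge LoRA-tuned weights back into the base model. As a rough illustration of what such a merge does (a minimal sketch with made-up dimensions, not this repo's actual code): the low-rank update `B @ A`, scaled by `alpha / r`, is folded into the frozen base weight, after which the adapter is no longer needed at inference time.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, alpha = 8, 2, 16          # hidden size, LoRA rank, LoRA alpha (illustrative values)
W = rng.standard_normal((d, d))           # frozen base weight
A = rng.standard_normal((r, d)) * 0.01    # LoRA down-projection
B = rng.standard_normal((d, r)) * 0.01    # LoRA up-projection
scale = alpha / r

# Merging folds the low-rank delta into the base weight once.
W_merged = W + scale * (B @ A)

x = rng.standard_normal((1, d))
y_adapter = x @ W.T + scale * (x @ A.T @ B.T)  # base path + adapter path
y_merged = x @ W_merged.T                      # single dense matmul

print(np.allclose(y_adapter, y_merged, atol=1e-6))  # → True
```

In libraries such as peft, this is what a merge utility performs layer by layer; after merging, the model can be saved and served as a plain dense checkpoint.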