Issues
Editing larger models raises an OOM error
#30 opened by echo-valor - 0
Tried baichuan13b; the results don't seem very good
#29 opened by tianmala - 3
Why modifying down_proj in llama?
#5 opened by Wenyueh - 7
Difference between this training method and LLaMA-Efficient-Tuning-main
#25 opened by yiGKTOL - 3
Where is the edited model saved?
#24 opened by mumuyeye - 2
RuntimeError: computing v Vector
#22 opened by MaximIfergan - 1
qwen support
#21 opened by zlh1992 - 5
Would you consider supporting ChatGLM2-6B?
#10 opened by zhuam - 1
LLaMA-2-7b-chat Editing failed
#18 opened by MIracleyin - 0
How to save baichuan-13b after editing?
#11 opened by sichehu - 2
Is there any way to apply this interesting algorithm to the chatGLM-6B or chatGLM2-6B models?
#3 opened by rayoa - 2
Editing a 7B model on a single 80GB GPU runs out of memory; how can I run it on one machine with multiple GPUs?
#16 opened by zhangfan-algo - 2
Does this editing approach have side effects, such as model forgetting?
#15 opened by runvyang - 1
A little mistake in HyperParams
#13 opened by canglincuizhu - 2
Error occurs when editing Baichuan-13B
#7 opened by hiyouga - 1
NotImplementedError raised when editing baichuan13b
#8 opened by sichehu - 1
How should the config be set up?
#6 opened by songbaiTalk - 0
At first glance, is the principle here to capture the diff in the model's internal parameters between two data samples?
#4 opened by guotong1988