Issues
Problem merging 7B models
#41 opened by Garmin-Qian - 1
Question about experimental settings
#39 opened by songjhPKU - 4
Environment issues
#40 opened by harmlessSR - 3
Problem merging the WizardMath-7b and WizardLM-7b models
#18 opened by sasgkhgw - 2
Python environment
#35 opened by 1998v7 - 3
About Table 1
#37 opened by LianShuQuan - 6
Alpaca eval evaluation issue
#34 opened by qq31415926 - 2
Why are evaluation metric values the same when run >= 1?
#36 opened by zwxu064 - 1
Any solution to merge 3B models?
#32 opened by KADCA21 - 1
Are the classification heads merged?
#31 opened by SpeeeedLee - 2
Alpaca_eval evaluation error
#33 opened by wang-kee - 19
How to merge multiple models?
#29 opened by guanfaqian - 6
Is it possible to drop 100%?
#30 opened by Guozhenyuan - 1
Script to reproduce all experiments in the paper
#28 opened by pkulium - 3
Seeking mirrors of WizardLM models
#27 opened by Sophia118Guo - 2
Couldn't find a dataset script at /home/dell7960/PycharmProjects/DARE/MergeLM/glue/glue.py or any data file in the same directory.
#23 opened by Synnai - 2
Model support
#22 opened by clclclaiggg - 4
How to reproduce the LM & Math & Code merging metrics reported in the paper
#21 opened by RDXSun - 2
PEFT integration of DARE method
#19 opened by pacman100 - 4
WizardCoder-Python-7B model accuracy issue
#17 opened by llwx593 - 1
Problems encountered when using the ties and magnitude methods
#16 opened by HypherX - 1
Hello, a question about choosing the code model for merging
#15 opened by anon6662 - 2
Question about encoder-based model merging
#13 opened by bestfleer - 4
Why does the model generate garbled output after merging wizard-lm and math
#14 opened by anon6662 - 2
WizardMath model embedding layer dimension issue
#12 opened by sasgkhgw - 2
It looks like the parameters are dropped randomly
#10 opened by ReactiveCJ - 2
What is the LICENSE type of this repo?
#8 opened by ramkumarkoppu - 9
Is there a merged model available for download?
#6 opened by kexul - 4
Llama model support
#5 opened by paulcx - 2
Is the environment right? vllm 0.11.4
#1 opened by tianyumyum