open-compass/MixtralKit
A toolkit for inference and evaluation of 'mixtral-8x7b-32kseqlen' from Mistral AI
Python · Apache-2.0
Issues
Where does eval_mixtral come from?
#22 opened by Anorid - 0
Where does eval_mixtral come from?
#26 opened by jerryli1981 - 0
Mixtral becomes lazy and disobeys instructions when it cannot print all outputs at once
#25 opened by zhuyinzhuyin - 0
Hello, can evaluation be run directly with the HuggingFace-split mixtral-8x7b weights? (a loading sketch follows this list)
#21 opened by shed-e - 0
Where to find the playground folder?
#23 opened by YJHMITWEB - 3
Evaluation error in eval_mixtral.py
#19 opened by runzeer - 1
MMLU Performance?
#13 opened by bdytx5 - 2
Is Mistral 8x7B 32k a pretrained model or an SFT model?
#14 opened by Ezra-Yu - 3
A few questions and suggestions
#9 opened by shuaidaming - 0
Why is performance on GSM8K and MATH lower than in the original Mixtral blog?
#12 opened by runzeer - 2
Support training scripts (full fine-tuning or PEFT LoRA)
#5 opened by matrixssy - 0
No checkpoint files found in ./ckpts
#3 opened by plutoda588 - 1
Support alternative parallelism
#2 opened by 152334H
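
Several of the issues above (e.g. #21) ask whether the HuggingFace-split mixtral-8x7b weights can be used directly. The following is a minimal sketch, not part of MixtralKit, of loading that checkpoint with the `transformers` library; the Hub model ID and generation settings are assumptions for illustration only.

```python
# Sketch (assumption, not MixtralKit code): load the HuggingFace-split
# Mixtral-8x7B checkpoint and run a quick generation sanity check.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-v0.1"  # assumed Hub ID of the split checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halve memory; the full model is still very large
    device_map="auto",           # shard the experts across available GPUs
)

prompt = "Question: What is 2 + 2?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```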