Issues
- OOM error (#34, opened by Davido111200, 1 comment)
- Transformers and Tokenizers version conflict (#31, opened by anhledger12, 1 comment)
- Are the MetaMath eval results valid? E.g., what is the standard way to evaluate on MATH? (#33, opened by brando90, 2 comments)
- eval_math and eval_gsm8k (#30, opened by zhentingqi, 0 comments)
- Error in run_create_backward_questions.sh (#25, opened by ustccyf, 0 comments)
- Questions about the MetaMath dataset (#26, opened by caihaunqai, 2 comments)
- path/to/llama-2 (#24, opened by poojitharamachandra, 0 comments)
- Potential error in eval_gsm8k.py (#23, opened by hbin0701, 1 comment)
- Dataset generation script (#1, opened by imoneoi, 1 comment)
- Few-shot in-context learning issues (#21, opened by runzeer, 1 comment)
- Request to add MuggleMATH as a new baseline in the MetaMath comparison with LLM models (#9, opened by ChengpengLi1003, 1 comment)
- How many tokens did MetaMath train on? (#14, opened by brando90, 1 comment)
- License inconsistency (#8, opened by muelletm, 1 comment)
- Will there be ablation studies? (#13, opened by yucc-leon, 1 comment)
- Modified parameter name that differs from the README (#10, opened by xukp20, 4 comments)
- eval_math script outputs 0 accuracy (#4, opened by zhangir-azerbayev, 1 comment)
- RuntimeError: probability tensor contains either `inf`, `nan` or element < 0 when setting max_new_tokens (#7, opened by AegeanYan, 1 comment)
- training_scripts (#5, opened by choco9966, 10 comments)
- Dataset (#2, opened by zhangir-azerbayev)