GAIR-NLP/abel

Beyond SFT: Math-oriented Pre-training

EthanC111 opened this issue · 0 comments

We definitely need to continue pretraining our base model on a large corpus of math-related text. A sketch of what that could look like is below.
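
For concreteness, here is a minimal sketch of continued pre-training with Hugging Face `transformers` and `datasets`. Everything in it is a placeholder assumption, not the project's actual setup: `EleutherAI/pythia-160m` stands in for the real base model, `open-web-math/open-web-math` stands in for the math corpus, and the hyperparameters are illustrative only.

```python
# Hedged sketch: continued pre-training (causal LM objective) on a math corpus.
# Model, dataset, and hyperparameters below are placeholder assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "EleutherAI/pythia-160m"  # stand-in for the actual base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Stream a public math-oriented web corpus; OpenWebMath is one option.
raw = load_dataset("open-web-math/open-web-math", split="train", streaming=True)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

# Drop the raw columns (OpenWebMath's schema) so only token fields remain.
tokenized = raw.map(
    tokenize, batched=True, remove_columns=["text", "url", "date", "metadata"]
)

# Same next-token objective as the original pre-training (no masking).
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="abel-math-cpt",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=32,
    learning_rate=2e-5,   # typically lower than from-scratch pre-training
    warmup_steps=100,
    max_steps=10_000,     # required when training on a streaming dataset
    logging_steps=50,
    save_steps=1_000,
)

Trainer(
    model=model, args=args, train_dataset=tokenized, data_collator=collator
).train()
```

The main design point the sketch tries to capture is that continued pre-training reuses the plain causal-LM objective on new domain data, usually with a reduced learning rate and a warmup so the model adapts to the math distribution without catastrophically forgetting its general abilities.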