- News and Updates
- Features
- Plots
- Datasets and Models
- Getting Started
- Usage
- Contact
- License
- Response Examples
- Community
- Reference
- [15/10/2024] Our report is on arXiv!
- [12/10/2024] OpenR has been released! 🚀
- ✅ Process-supervision Data Generation
- ✅ Online Policy Training
- ✅ Generative and Discriminative PRM Training
- ✅ Multiple Search Strategies
- ✅ Test-time Computation and Scaling Law
- MATH-APS (our dataset)
- MATH-psa (our process reward model)
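Both are hosted on the Hugging Face Hub. As a minimal sketch, they could be fetched with the `huggingface-cli` tool (note: the repo ids below are hypothetical placeholders, not confirmed Hub paths):

# Hypothetical repo ids; replace with the actual Hub paths for MATH-APS and MATH-psa.
huggingface-cli download openreasoner/MATH-APS --repo-type dataset --local-dir data/MATH-APS
huggingface-cli download openreasoner/MATH-psa --local-dir models/MATH-psa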
conda create -n open_reasoner python=3.10
conda activate open_reasoner
pip install -r requirements.txt
pip3 install "fschat[model_worker,webui]"
pip install -U pydantic
cd envs/MATH/latex2sympy
pip install -e .
cd -
Before running the project, please ensure that all required base models are downloaded. The models used in this project include:
- Qwen2.5-Math-1.5B-Instruct
- Qwen2.5-Math-7B-Instruct
- Qwen2.5-Math-RM-72B
- peiyi9979/mistral-7b-sft
- peiyi9979/math-shepherd-mistral-7b-prm
To download these models, please refer to the Hugging Face model-downloading tutorial for step-by-step guidance on downloading models from the Hugging Face Hub.
Please make sure all models are saved in the directories expected by the project setup before proceeding.
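For example, a minimal download sketch using `huggingface-cli` (assuming `$MODEL_BASE` is your local model directory; repeat for each model listed above):

# $MODEL_BASE is a placeholder for your local model directory.
export MODEL_BASE=/path/to/models
huggingface-cli download Qwen/Qwen2.5-Math-1.5B-Instruct \
    --local-dir $MODEL_BASE/Qwen2.5-Math-1.5B-Instruct
huggingface-cli download peiyi9979/math-shepherd-mistral-7b-prm \
    --local-dir $MODEL_BASE/math-shepherd-mistral-7b-prm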
Before running inference, please modify the following variables in the scripts under reason/llm_service/ to set the appropriate base models for your usage (an example configuration follows the list):
- $MODEL_BASE: set this to the directory where your models are stored.
- $POLICY_MODEL_NAME: set this to the name of the policy model you wish to use.
- $VALUE_MODEL_NAME: set this to the name of the value model you wish to use.
- $NUM_LM_WORKER: set this to the number of language model (LM) workers to start.
- $NUM_RM_WORKER: set this to the number of reward model (RM) workers to start.
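For instance, a hypothetical configuration (placeholder values, not project defaults):

MODEL_BASE=/path/to/models                       # directory holding the downloaded models
POLICY_MODEL_NAME=Qwen2.5-Math-1.5B-Instruct     # policy (generator) model
VALUE_MODEL_NAME=math-shepherd-mistral-7b-prm    # value / process reward model
NUM_LM_WORKER=1                                  # number of LM workers to start
NUM_RM_WORKER=1                                  # number of RM workers to start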
After setting these variables, you can prepare and run inference using the different techniques described below.
For example, to start the LM and RM services for the Math Shepherd model, run the following command:
sh reason/llm_service/create_service_math_shepherd.sh
Make sure the model names passed to the worker flags (--LM, --RM) in the script align with the variables ($POLICY_MODEL_NAME, $VALUE_MODEL_NAME) of the pending worker!
export PYTHONPATH=$(pwd)
sh scripts/eval/cot_greedy.sh
# Method: cot. Average result: ({'majority_vote': 0.734, 'total_completion_tokens': 559.13},)
sh scripts/eval/cot_rerank.sh
# Method: best_of_n. Average result: ({'majority_vote': 0.782,
# 'prm_min_max': 0.772,
# 'prm_min_vote': 0.792,
# 'prm_last_max': 0.776,
# 'prm_last_vote': 0.792,
# 'total_completion_tokens': 4431.268},)
sh scripts/eval/beam_search.sh
# Method: beam_search. Average result: ({'majority_vote': 0.74, 'total_completion_tokens': 2350.492},)
First, modify $dataset_path, $model_name_or_path and $prm_name_or_path in train/mat/scripts/train_llm.sh, for example as sketched below.
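A hypothetical setting of these variables (placeholder paths; adjust to your setup):

dataset_path=/path/to/MATH-APS                              # training dataset
model_name_or_path=$MODEL_BASE/Qwen2.5-Math-1.5B-Instruct   # policy model to train
prm_name_or_path=$MODEL_BASE/math-shepherd-mistral-7b-prm   # process reward model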
cd train/mat/scripts
bash train_llm.sh
cd prm/code
# single GPU
python finetune_qwen_single_gpu.py --model_path $YOUR_MODEL_PATH \
--train_data_path $TRAIN_DATA_PATH \
--test_data_path $TEST_DATA_PATH
# multi GPU
torchrun --nproc_per_node=2 finetune_qwen.py --model_path $YOUR_MODEL_PATH \
--data_path $YOUR_DATA_FOLDER_PATH \
--datasets both \
- Add More Comprehensive Evaluations on RL Training and Search Strategies
- Scaling the Prover-Verifier Model Size
- Support Self-improvement Training
The OpenR community is maintained by:
- Openreasoner Team (openreasoner@gmail.com)
OpenR is released under the MIT License.
If you find our resources helpful, please cite our paper:
@article{openr2024,
  title   = {OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models},
  author  = {Jun Wang and Meng Fang and Ziyu Wan and Muning Wen and Jiachen Zhu and Anjie Liu and
             Ziqin Gong and Yan Song and Lei Chen and Lionel M. Ni and Linyi Yang and Ying Wen and Weinan Zhang},
  journal = {arXiv preprint arXiv:2410.09671},
  url     = {https://arxiv.org/pdf/2410.09671},
  year    = {2024}
}
[1] AlphaZero-like tree-search can guide large language model decoding and training.
[2] Reasoning with language model is planning with world model.
[3] Scaling LLM test-time compute optimally can be more effective than scaling model parameters.
[4] Think before you speak: Training language models with pause tokens.

[1] Training verifiers to solve math word problems.
[2] Solving math word problems with process- and outcome-based feedback.
[4] Making large language models better reasoners with step-aware verifier.
[5] OVM: Outcome-supervised value models for planning in mathematical reasoning.
[6] Generative verifiers: Reward modeling as next-token prediction.

[1] STaR: Bootstrapping reasoning with reasoning.
[2] Quiet-STaR: Language models can teach themselves to think before speaking.
[3] Improve mathematical reasoning in language models by automated process supervision.
[4] Shepherd: A critic for language model generation.
[5] Math-Shepherd: Verify and reinforce LLMs step-by-step without human annotations.