
OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models

Paper · Tutorial · Code · Docs · Data · Model · Issue



Table of Contents
  1. News and Updates
  2. Features
  3. Plots
  4. Provided Datasets and Models
  5. Getting Started
  6. Usage
  7. Future Plan
  8. Contact
  9. License
  10. Citation
  11. Response Examples
  12. Community
  13. Reference

News and Updates

  • [15/10/2024] Our report is on arXiv!
  • [12/10/2024] OpenR has been released! 🚀

Features


  • ✅ Process-supervision Data Generation
  • ✅ Online Policy Training
  • ✅ Generative and Discriminative PRM Training
  • ✅ Multiple Search Strategies
  • ✅ Test-time Computation and Scaling Law

Plots

(Figures: PRM evaluation results and inference-time scaling results.)

Provided Datasets and Models

  • MATH-APS (our dataset)
  • MATH-psa (our process reward model)

Getting Started

Installation

conda create -n open_reasoner python=3.10
conda activate open_reasoner
pip install -r requirements.txt
pip install "fschat[model_worker,webui]"
pip install -U pydantic
cd envs/MATH/latex2sympy
pip install -e .
cd -
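
After installation, you can optionally sanity-check that the key dependencies import correctly (a quick check, not part of the project scripts):

python -c "import fastchat, pydantic; print(fastchat.__version__, pydantic.VERSION)"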

Download Base Models

Before running the project, please ensure that all required base models are downloaded. The models used in this project include:

  • Qwen2.5-Math-1.5B-Instruct, Qwen2.5-Math-7B-Instruct
  • Qwen2.5-Math-RM-72B
  • peiyi9979/mistral-7b-sft
  • peiyi9979/math-shepherd-mistral-7b-prm

To download these models, please refer to the Hugging Face model downloading tutorial for step-by-step guidance on fetching models from the Hugging Face Hub.

Please make sure all models are saved in the directories expected by the project setup before proceeding.
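
For example, a model can be fetched with the huggingface-cli tool (a minimal sketch; the target directory is illustrative and should match the $MODEL_BASE used by the service scripts):

pip install -U "huggingface_hub[cli]"
huggingface-cli download Qwen/Qwen2.5-Math-1.5B-Instruct --local-dir $MODEL_BASE/Qwen2.5-Math-1.5B-Instruct
huggingface-cli download peiyi9979/math-shepherd-mistral-7b-prm --local-dir $MODEL_BASE/math-shepherd-mistral-7b-prm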

Quickstart

Before running inference, please modify the following variables in the scripts under reason/llm_service/ to set the appropriate base models for your usage (an illustrative example follows this list):

  • $MODEL_BASE: Set this to the directory where your models are stored.
  • $POLICY_MODEL_NAME: Set this to the name of the policy model you wish to use.
  • $VALUE_MODEL_NAME: Set this to the name of the value model you wish to use.
  • $NUM_LM_WORKER: Set this to the number of language model (LM) workers to start.
  • $NUM_RM_WORKER: Set this to the number of reward model (RM) workers to start.
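
For instance, a minimal configuration might look like the following (model names, paths, and worker counts are illustrative, not shipped defaults):

# Illustrative values; edit the script under reason/llm_service/ to match your setup
MODEL_BASE=/path/to/models                        # directory holding the downloaded base models
POLICY_MODEL_NAME=Qwen2.5-Math-1.5B-Instruct      # policy (generation) model
VALUE_MODEL_NAME=math-shepherd-mistral-7b-prm     # value / process reward model
NUM_LM_WORKER=1                                   # number of LM workers to start
NUM_RM_WORKER=1                                   # number of RM workers to start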

With these variables set, you can start the services and run inference using the different techniques below.

Start LM & RM Services

For example, to start the LM and RM services for the Math Shepherd model, run the following command:

sh reason/llm_service/create_service_math_shepherd.sh
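
Because the services are built on FastChat, you can optionally verify that a worker has registered with the controller using FastChat's test utility (assuming the default controller address; the model name must match $POLICY_MODEL_NAME):

python -m fastchat.serve.test_message --model-name $POLICY_MODEL_NAME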

Usage

Run Inference

⚠️ Make sure the input arguments (--LM, --RM) in the evaluation script match the variables ($POLICY_MODEL_NAME, $VALUE_MODEL_NAME) used by the launched workers!

export PYTHONPATH=$(pwd)
sh scripts/eval/cot_greedy.sh

# Method: cot. Average result: ({'majority_vote': 0.734, 'total_completion_tokens': 559.13},)

sh scripts/eval/cot_rerank.sh

# Method: best_of_n. Average result: ({'majority_vote': 0.782, 
#                                       'prm_min_max': 0.772, 
#                                       'prm_min_vote': 0.792, 
#                                       'prm_last_max': 0.776, 
#                                       'prm_last_vote': 0.792, 
#                                       'total_completion_tokens': 4431.268},)
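
Roughly speaking, the prm_min_* and prm_last_* keys combine a step-score aggregation rule (the minimum step score or the last step score of a solution) with an answer-selection rule (picking the highest-scored completion, or a PRM-weighted majority vote), while majority_vote is plain self-consistency over the sampled completions.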

sh scripts/eval/beam_search.sh

# Method: beam_search. Average result: ({'majority_vote': 0.74, 'total_completion_tokens': 2350.492},)

Run Training

⚠️ Before training, please modify the $dataset_path, $model_name_or_path and $prm_name_or_path in train/mat/scripts/train_llm.sh.
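
For example, these variables at the top of train_llm.sh might be set as follows (the paths are illustrative placeholders):

# Illustrative values inside train/mat/scripts/train_llm.sh
dataset_path=/path/to/your/training_data
model_name_or_path=/path/to/models/Qwen2.5-Math-1.5B-Instruct
prm_name_or_path=/path/to/models/math-shepherd-mistral-7b-prm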

cd train/mat/scripts
bash train_llm.sh

Run PRM Learning

cd prm/code

# single GPU
python finetune_qwen_single_gpu.py --model_path $YOUR_MODEL_PATH \
                                   --train_data_path $TRAIN_DATA_PATH \
                                   --test_data_path $TEST_DATA_PATH


# multi GPU
torchrun --nproc_per_node=2 finetune_qwen.py --model_path $YOUR_MODEL_PATH \
                                             --data_path $YOUR_DATA_FOLDER_PATH \
                                             --datasets both
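
Note that --nproc_per_node should match the number of GPUs available on the machine running torchrun.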

Future Plan

  • Add More Comprehensive Evaluations on RL Training and Search Strategies

  • Scaling the Prover-Verifier Model Size

  • Support Self-improvement Training

Contact

The OpenR community is maintained by:

License

OpenR is released under the MIT License.

Citation

If you find our resources helpful, please cite our paper:

@article{openr2024,
  title   = {OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models},
  author  = {Jun Wang and Meng Fang and Ziyu Wan and Muning Wen and Jiachen Zhu and Anjie Liu and
             Ziqin Gong and Yan Song and Lei Chen and Lionel M. Ni and Linyi Yang and Ying Wen and Weinan Zhang},
  journal = {arXiv preprint arXiv:2410.09671},
  url     = {https://arxiv.org/pdf/2410.09671},
  year    = {2024}
}

Response Examples

Comparing PRMs: Math-psa (Ours) vs. Math-Shepherd

(QA examples 1 and 2; see the images in the repository.)

Justifying RL Training

(QA examples 3 and 4; see the images in the repository.)

Exploring Test-time Computation

(QA examples 5, 6, and 7; see the images in the repository.)

Community

WeChat:

Reference

Inference-time Computing

[1] AlphaZero-like tree-search can guide large language model decoding and training.

[2] Reasoning with language model is planning with world model.

[3] Scaling LLM test-time compute optimally can be more effective than scaling model parameters.

[4] Think before you speak: Training language models with pause tokens.

From Outcome Supervision to Process Supervision

[1] Training verifiers to solve math word problems.

[2] Solving math word problems with process- and outcome-based feedback.

[3] Let's verify step by step.

[4] Making large language models better reasoners with step-aware verifier.

[5] OVM: Outcome-supervised value models for planning in mathematical reasoning.

[6] Generative verifiers: Reward modeling as next-token prediction.

Data Acquisition

[1] STaR: Bootstrapping reasoning with reasoning.

[2] Quiet-STaR: Language models can teach themselves to think before speaking.

[3] Improve mathematical reasoning in language models by automated process supervision.

[4] Shepherd: A critic for language model generation.

[5] Math-Shepherd: Verify and reinforce LLMs step-by-step without human annotations.