LLM-RLHF-Tuning-with-PPO-and-DPO

A comprehensive toolkit for Reinforcement Learning from Human Feedback (RLHF) training, featuring instruction fine-tuning, reward model training, and support for the PPO and DPO algorithms, with configurations for the Alpaca, LLaMA, and LLaMA2 models.

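For orientation, DPO trains the policy directly on preference pairs without a separate reward model, by contrasting the policy's and a frozen reference model's log-probabilities for the preferred and dispreferred responses. Below is a minimal sketch of the standard DPO loss in PyTorch; it is illustrative only, not taken from this repository, and the function and tensor names (dpo_loss, the per-sequence log-probability inputs, beta) are assumptions.

```python
import torch
import torch.nn.functional as F


def dpo_loss(
    policy_chosen_logps: torch.Tensor,    # log pi_theta(y_w | x), shape (batch,)
    policy_rejected_logps: torch.Tensor,  # log pi_theta(y_l | x), shape (batch,)
    ref_chosen_logps: torch.Tensor,       # log pi_ref(y_w | x), shape (batch,)
    ref_rejected_logps: torch.Tensor,     # log pi_ref(y_l | x), shape (batch,)
    beta: float = 0.1,
) -> torch.Tensor:
    """Standard DPO loss over a batch of preference pairs (illustrative sketch)."""
    # Log-ratios of policy to reference for preferred and dispreferred responses.
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # -log sigmoid(beta * margin) increases the policy's preference margin.
    logits = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(logits).mean()
```
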
Primary language: Python
