DPPO

Official implementation of "Direct Preference-based Policy Optimization without Reward Modeling" (NeurIPS 2023)

Primary language: Python · License: MIT
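
To illustrate the idea named in the title, the sketch below shows a generic preference-based policy objective: a Bradley–Terry-style logistic loss applied directly to policy log-probabilities of preferred vs. rejected trajectory segments, with no separate reward network. This is a minimal illustration under assumptions, not the repository's actual algorithm; all names (`PolicyNet`, `segment_logprob`, `preference_loss`, the toy data) are hypothetical.

```python
# Hypothetical sketch: direct preference-based policy optimization without an
# explicit reward model. Not the method implemented in this repository.
import torch
import torch.nn as nn


class PolicyNet(nn.Module):
    """Toy Gaussian policy over continuous actions (illustrative only)."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.mean = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(), nn.Linear(hidden, act_dim)
        )
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def segment_logprob(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        # obs: (B, T, obs_dim), act: (B, T, act_dim)
        dist = torch.distributions.Normal(self.mean(obs), self.log_std.exp())
        # Sum log-probabilities over time steps and action dimensions -> (B,)
        return dist.log_prob(act).sum(dim=(-1, -2))


def preference_loss(policy, seg_preferred, seg_rejected, beta: float = 1.0):
    """Logistic (Bradley-Terry-style) loss that pushes the preferred segment's
    log-likelihood above the rejected one's, skipping reward modeling."""
    lp_w = policy.segment_logprob(*seg_preferred)
    lp_l = policy.segment_logprob(*seg_rejected)
    return -torch.nn.functional.logsigmoid(beta * (lp_w - lp_l)).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    policy = PolicyNet(obs_dim=4, act_dim=2)
    opt = torch.optim.Adam(policy.parameters(), lr=3e-4)
    # Random stand-in data: 8 preference pairs of length-10 segments.
    seg_preferred = (torch.randn(8, 10, 4), torch.randn(8, 10, 2))
    seg_rejected = (torch.randn(8, 10, 4), torch.randn(8, 10, 2))
    loss = preference_loss(policy, seg_preferred, seg_rejected)
    loss.backward()
    opt.step()
    print(f"preference loss: {loss.item():.4f}")
```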
