DPPO

Official implementation of "Direct Preference-based Policy Optimization without Reward Modeling" (NeurIPS 2023)

Primary language: Python. License: MIT.
