hh-rlhf

Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"
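As the description notes, this repository distributes human preference comparisons. Assuming the data is shipped as JSON lines where each record pairs a preferred ("chosen") and a dispreferred ("rejected") conversation transcript, a minimal sketch of reading one record looks like this (the sample conversation text is invented for illustration):

```python
import json

# Illustrative record in the style of a pairwise preference dataset:
# one JSON object per line, holding a "chosen" and a "rejected" transcript.
# The conversation content below is made up for this example.
sample_line = json.dumps({
    "chosen": "\n\nHuman: How do I bake bread?\n\nAssistant: Start by mixing "
              "flour, water, yeast, and salt, then let the dough rise.",
    "rejected": "\n\nHuman: How do I bake bread?\n\nAssistant: I can't help "
                "with that.",
})

record = json.loads(sample_line)
print(sorted(record.keys()))  # ['chosen', 'rejected']
```

A reward model for RLHF would typically be trained to score the "chosen" transcript above the "rejected" one for each such pair.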

MIT License
