human-feedback
There are 18 repositories under the human-feedback topic.
lucidrains/PaLM-rlhf-pytorch
Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM
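A minimal pretraining step in the style of the repo's README: the base PaLM is first trained with ordinary next-token prediction before any RLHF is applied. The constructor arguments below follow the README and may differ between versions.

```python
# Sketch of base-model pretraining with palm_rlhf_pytorch, assuming the
# README-style constructor; argument names may change across releases.
import torch
from palm_rlhf_pytorch import PaLM

palm = PaLM(num_tokens=20000, dim=512, depth=12)

seq = torch.randint(0, 20000, (1, 2048))
loss = palm(seq, return_loss=True)  # standard next-token cross-entropy
loss.backward()
```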
opendilab/awesome-RLHF
A curated list of reinforcement learning with human feedback resources (continually updated)
conceptofmind/LaMDA-rlhf-pytorch
Open-source pre-training implementation of Google's LaMDA in PyTorch, with RLHF added in the style of ChatGPT.
huggingface/data-is-better-together
Let's build better datasets, together!
yk7333/d3po
[CVPR 2024] Code for the paper "Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model"
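D3PO fine-tunes diffusion models on human preferences without training a reward model. As a hedged illustration of the underlying idea, here is the generic direct-preference (DPO-style) loss that the paper adapts to diffusion: only the (log-)likelihoods of preferred vs. dispreferred samples under the policy and a frozen reference are needed. The function and tensors are illustrative stand-ins, not the repo's API.

```python
# Generic direct-preference loss (DPO-style), shown for intuition only;
# D3PO's actual objective is adapted to the diffusion setting.
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Push the policy to widen the likelihood margin between preferred
    and dispreferred samples, relative to a frozen reference model."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -F.logsigmoid(beta * margin).mean()

# Illustrative log-probabilities for a batch of 4 preference pairs.
loss = dpo_loss(torch.randn(4), torch.randn(4), torch.randn(4), torch.randn(4))
```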
xrsrke/instructGOOSE
Implementation of Reinforcement Learning from Human Feedback (RLHF)
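For orientation, the core of most RLHF pipelines is a reward model trained on pairwise human preferences with a Bradley-Terry objective. The sketch below is a self-contained stand-in with random data, not instructGOOSE's actual API.

```python
# Minimal pairwise reward-model training step (Bradley-Terry objective);
# the tiny scorer and random embeddings are hypothetical stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRewardModel(nn.Module):
    """Maps a pooled sequence embedding to a single scalar reward."""
    def __init__(self, hidden_size: int = 64):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, 1)

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:
        return self.scorer(pooled).squeeze(-1)

model = TinyRewardModel()
chosen = torch.randn(8, 64)    # embeddings of human-preferred responses
rejected = torch.randn(8, 64)  # embeddings of dispreferred responses

# Maximize the log-sigmoid of the reward margin between chosen and rejected.
loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
loss.backward()
```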
wxjiao/ParroT
The ParroT framework enhances and regulates translation abilities during chat, building on open-source LLMs (e.g., LLaMA-7b, Bloomz-7b1-mt) and human-written translation and evaluation data.
trubrics/trubrics-sdk
Product analytics for AI Assistants
PKU-Alignment/beavertails
BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs).
davidberenstein1957/dataset-viber
Dataset Viber is your chill repo for data collection, annotation and vibe checks.
HannahKirk/prism-alignment
The Prism Alignment Project
ZhenbangDu/Reliable_AD
[ECCV 2024] Towards Reliable Advertising Image Generation Using Human Feedback
ZiyiZhang27/tdpo
[ICML 2024] Code for the paper "Confronting Reward Overoptimization for Diffusion Models: A Perspective of Inductive and Primacy Biases"
gao-g/prelude
Code for the paper "Aligning LLM Agents by Learning Latent Preference from User Edits".
AlaaLab/pathologist-in-the-loop
[NeurIPS 2023] Official Codebase for "Aligning Synthetic Medical Images with Clinical Knowledge using Human Feedback"
victor-iyi/rlhf-trl
Reinforcement Learning from Human Feedback with 🤗 TRL
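A minimal PPO step with TRL's classic API is sketched below (roughly trl < 0.8; later releases restructured PPOTrainer, so treat these names and signatures as assumptions rather than the current API).

```python
# One PPO update with 🤗 TRL's classic PPOTrainer; the constant reward is a
# stand-in for a trained reward model, and the API reflects older trl versions.
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

config = PPOConfig(model_name="gpt2", learning_rate=1.41e-5,
                   batch_size=1, mini_batch_size=1)
model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
tokenizer = AutoTokenizer.from_pretrained(config.model_name)
tokenizer.pad_token = tokenizer.eos_token

ppo_trainer = PPOTrainer(config=config, model=model, tokenizer=tokenizer)

query_tensor = tokenizer.encode("The movie was", return_tensors="pt")
response_tensor = ppo_trainer.generate(
    [q for q in query_tensor], return_prompt=False,
    max_new_tokens=16, pad_token_id=tokenizer.eos_token_id,
)

# The reward would normally come from a reward model; a constant stands in here.
reward = [torch.tensor(1.0)]
stats = ppo_trainer.step([q for q in query_tensor], response_tensor, reward)
```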
wang8740/MAP
Documentation at
JacqueWill/SEO_HIF_JS
Search Engine Optimization using Human Implicit Feedback