Please leave us a star ⭐ if you find this work helpful.
- [2025/9] 🔥🔥 Seedream-4.0 is added to all 🏆 Leaderboards.
- [2025/9] 🔥🔥 We release the UniGenBench 🏆 Leaderboard (English Long) and 🏆 Leaderboard (Chinese Long). We will continue to update them regularly.
- [2025/9] 🔥🔥 GPT-4o, Imagen-4-Ultra, Nano Banana, Seedream-3.0, Qwen-Image, and FLUX-Kontext-[Max/Pro] are added to the UniGenBench 🏆 Leaderboard (English) and 🏆 Leaderboard (Chinese).
- [2025/8] 🔥🔥 We release Pref-GRPO, UniGenBench, and the 🏆 Leaderboard (English).
- Clone this repository and navigate to the folder:

```bash
git clone https://github.com/CodeGoat24/UnifiedReward.git
cd UnifiedReward/Pref-GRPO
```
- Install the training package:

```bash
conda create -n PrefGRPO python=3.12
conda activate PrefGRPO
bash env_setup.sh fastvideo
cd open_clip
pip install -e .
cd ..
```
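To confirm the editable `open_clip` install is picked up by the new environment, a quick import check can help (a minimal sketch; the printed version string will vary):

```bash
# Verify that open_clip imports cleanly from the editable install
python -c "import open_clip; print(open_clip.__version__)"
```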
- Download models:

```bash
huggingface-cli download CodeGoat24/UnifiedReward-qwen-7b
huggingface-cli download CodeGoat24/UnifiedReward-Think-qwen-7b
wget https://huggingface.co/apple/DFN5B-CLIP-ViT-H-14-378/resolve/main/open_clip_pytorch_model.bin
```
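If you prefer the checkpoints in explicit locations (e.g., so you can point the training scripts at local paths), `huggingface-cli download` accepts `--local-dir` and `wget` accepts `-O`. The `./ckpts/...` paths below are only illustrative, not paths the scripts require:

```bash
# Place the reward models and CLIP weights at explicit local paths
# (the ./ckpts/... directories are an example, not a requirement)
huggingface-cli download CodeGoat24/UnifiedReward-qwen-7b --local-dir ./ckpts/UnifiedReward-qwen-7b
huggingface-cli download CodeGoat24/UnifiedReward-Think-qwen-7b --local-dir ./ckpts/UnifiedReward-Think-qwen-7b
mkdir -p ./ckpts/DFN5B-CLIP-ViT-H-14-378
wget -O ./ckpts/DFN5B-CLIP-ViT-H-14-378/open_clip_pytorch_model.bin \
    https://huggingface.co/apple/DFN5B-CLIP-ViT-H-14-378/resolve/main/open_clip_pytorch_model.bin
```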
- Install vLLM:

```bash
pip install vllm==0.9.0.1 transformers==4.52.4
```
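A quick way to confirm the pinned versions are the ones actually importable in the environment:

```bash
# Print the installed vllm and transformers versions
python -c "import vllm, transformers; print(vllm.__version__, transformers.__version__)"
```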
- Start the server:

```bash
bash vllm_utils/vllm_server_UnifiedReward_Think.sh
```
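Once the script is running, you can sanity-check that the reward model is being served. This assumes the script launches vLLM's OpenAI-compatible server on its default port 8000; check `vllm_utils/vllm_server_UnifiedReward_Think.sh` for the actual host and port:

```bash
# List the models served by the vLLM endpoint (port 8000 is an assumption)
curl http://localhost:8000/v1/models
```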
We use the training prompts from UniGenBench, provided in `./data/unigenbench_train_data.txt`:

```bash
bash fastvideo/data_preprocess/preprocess_flux_rl_embeddings.sh
```
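Before preprocessing, it can be worth eyeballing the prompt file; assuming one prompt per line, something like:

```bash
# Peek at the first prompts and count them (assumes one prompt per line)
head -n 3 ./data/unigenbench_train_data.txt
wc -l ./data/unigenbench_train_data.txt
```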
```bash
bash finetune_prefgrpo_flux.sh
```
We use the test prompts from UniGenBench, provided in `./data/unigenbench_test_data.csv`:

```bash
bash inference/flux_dist_infer.sh
```
Then, evaluate the generated outputs following the UniGenBench evaluation protocol.
If you have any comments or questions, please open a new issue or feel free to contact Yibin Wang.
Our training code is based on DanceGRPO, Flow-GRPO, and FastVideo.
We also use UniGenBench for T2I model semantic consistency evaluation.
Thanks to all the contributors!
```bibtex
@article{Pref-GRPO&UniGenBench,
  title={Pref-GRPO: Pairwise Preference Reward-based GRPO for Stable Text-to-Image Reinforcement Learning},
  author={Wang, Yibin and Li, Zhimin and Zang, Yuhang and Zhou, Yujie and Bu, Jiazi and Wang, Chunyu and Lu, Qinglin and Jin, Cheng and Wang, Jiaqi},
  journal={arXiv preprint arXiv:2508.20751},
  year={2025}
}
```