Differences in results between the code and the leaderboard
yifan123 opened this issue · 1 comment
This is excellent work, and I appreciate your open-source contribution. However, I have a couple of points that I find confusing:
- The leaderboard (https://tatsu-lab.github.io/alpaca_eval/) shows a win rate of 95.28% for GPT4, but the win rate in the code is only 80%. This inconsistency is perplexing.
- I apologize for not being able to locate the specific model from your paper in the leaderboard. Could you please clarify which model from the leaderboard corresponds to the one mentioned in your paper?
Thank you for your attention to these matters, and I look forward to your response.
Thanks for your interest and for raising these points!
> The leaderboard (https://tatsu-lab.github.io/alpaca_eval/) shows a win rate of 95.28% for GPT4, but the win rate in the code is only 80%. This inconsistency is perplexing.
While the AlpacaEval package and the evaluation component in AlpacaFarm are based on the same set of inputs/instructions, there are substantial differences in how the automated pairwise preferences are collected. As a result, numbers from AlpacaEval are not comparable to those from AlpacaFarm.
If what you care about is ranking methods or systems, the evaluation components of AlpacaEval and AlpacaFarm serve different purposes.
If you want to build the next best chatbot model that does well on open-ended queries (without necessarily inventing new methods, e.g., by using better data), use AlpacaEval. If you want to develop the next best RLHF method without collecting actual human preference annotations, consider AlpacaFarm.
The precise differences between AlpacaEval and the evaluation in AlpacaFarm are documented here.
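To make the distinction concrete, here is a rough sketch of the two evaluation entry points. The function and argument names (`alpaca_leaderboard`, `path_or_all_outputs`, the `alpaca_eval` CLI invocation) are assumptions based on the two packages' READMEs and may have drifted, so treat this as an illustration rather than a reference.

```python
# Sketch only: the API names below are assumptions taken from the AlpacaFarm /
# AlpacaEval READMEs and may differ from the current packages.

# (1) AlpacaFarm-style evaluation: simulated pairwise preferences, i.e. the
#     pipeline that produces the ~80% GPT4 number in the table further below.
from alpaca_farm.auto_annotations import alpaca_leaderboard

df_results = alpaca_leaderboard(
    path_or_all_outputs="outputs.json",  # your model's outputs on the eval instructions
    name="My fancy model",
)
print(df_results.to_string(float_format="%.2f"))

# (2) AlpacaEval: a different annotator configuration and leaderboard, i.e. the
#     pipeline behind the 95.28% number. Usually run as a CLI, e.g.:
#
#     alpaca_eval --model_outputs outputs.json
#
# Because the annotators differ, the two win rates are not comparable.
```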
> I apologize for not being able to locate the specific model from your paper in the leaderboard. Could you please clarify which model from the leaderboard corresponds to the one mentioned in your paper?
If you're referring to the table below (copied from the README.md), here's the explanation:
| model | n_draws | n_total | n_wins | n_wins_base | standard_error | win_rate |
|---|---:|---:|---:|---:|---:|---:|
| GPT4 | 17 | 805 | 639 | 149 | 1.38 | 80.43 |
| ChatGPT | 9 | 804 | 489 | 306 | 1.71 | 61.38 |
| My fancy model | 9 | 804 | 483 | 312 | 1.71 | 60.63 |
| RLHF PPO | 9 | 803 | 370 | 424 | 1.75 | 46.64 |
| SFT 52k (Alpaca 7B) | 16 | 804 | 320 | 468 | 1.72 | 40.80 |
| SFT 10k | 19 | 802 | 278 | 505 | 1.67 | 35.85 |
| Davinci001 | 0 | 805 | 201 | 604 | 1.53 | 24.97 |
| LLaMA 7B | 0 | 786 | 94 | 692 | 1.16 | |
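In case it helps to read the columns: the win rate is computed pairwise against the reference outputs, and a draw appears to count as half a win. Here is a quick sanity check (my own sketch, not part of the README; the half-win-for-a-draw convention is inferred from the fully populated rows):

```python
# Recompute the win_rate column from the raw counts, assuming a draw
# counts as half a win (this matches every row that has a win_rate above).
def win_rate(n_wins: int, n_draws: int, n_total: int) -> float:
    return 100 * (n_wins + 0.5 * n_draws) / n_total

print(f"{win_rate(639, 17, 805):.2f}")  # GPT4     -> 80.43
print(f"{win_rate(489, 9, 804):.2f}")   # ChatGPT  -> 61.38
print(f"{win_rate(94, 0, 786):.2f}")    # LLaMA 7B -> 11.96 (missing from the pasted table)
```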
This table is a reproduction of the simulated win-rate column in Table 2 of the paper.
GPT4, ChatGPT, SFT 52k, SFT 10k, Davinci001, and LLaMA 7B all map exactly to the corresponding rows of that Table 2; RLHF PPO maps to the PPO row of that table.