[Bug/Assistance] - Reproducing Results on Alfworld (HH) (vs. ReAct paper)
Bug / Assistance Description
The results reported in the HH column are very different from those in the ReAct paper. In particular, ReAct reports a much higher success rate (see below).
To Reproduce
See screenshots below. Your results in the HH column indicate 16% success for text-davinci-002 or gpt-3.5-turbo. However, the results using text-davinci-002 in ReAct indicate 78% (second screenshot). This is a significant difference.
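To put the gap in absolute terms, here is a quick back-of-envelope sketch. Note the task count of 134 (the unseen ALFWorld split used in the ReAct paper) is an assumption here; AgentBench's split may differ, so the counts are only illustrative:

```python
# Back-of-envelope check of the reported gap.
# ASSUMPTION: 134 unseen ALFWorld tasks, as in the ReAct paper's eval split;
# AgentBench's actual task count may differ.
NUM_TASKS = 134

def implied_successes(rate: float, total: int = NUM_TASKS) -> int:
    """Approximate number of solved tasks implied by a success rate."""
    return round(rate * total)

agentbench_hh = implied_successes(0.16)  # AgentBench HH column
react_paper = implied_successes(0.78)    # ReAct paper, text-davinci-002

print(f"AgentBench HH: ~{agentbench_hh} tasks solved")
print(f"ReAct paper:   ~{react_paper} tasks solved")
print(f"Gap:           ~{react_paper - agentbench_hh} tasks")
```

Even allowing for a different split size, the gap is roughly 80 tasks, which is far beyond run-to-run noise.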
Screenshots or Terminal Copy&Paste
![ReAct Paper] (screenshot of the ReAct paper's ALFWorld results table; original private image link has expired)
Concrete Questions / Actions:
Please tell us:
- How does your evaluation for Alfworld (HH) differ from ReAct's?
- Which exact model did you use?
- Which prompts did you use (1-shot, 2-shot), and are they the same as in the ReAct paper?
- Why are the results so different?
Please read the paper carefully. You can find all the prompts in the appendix or the code. The results are different because (1) we are not using the same prompts, and (2) we are not using exactly the same environment.
Thanks for coming back @zhc7.
Thanks for clarifying. Yes, in Appendix G.2 a prompt example can be seen, which I guess corresponds to either:
a. https://github.com/THUDM/AgentBench/blob/main/src/server/tasks/alfworld/prompts/alfworld_multiturn_react.json
b. https://github.com/THUDM/AgentBench/blob/main/src/server/tasks/alfworld/prompts/alfworld_multiturn_plan_first.json
Can you elaborate how the environment is not exactly the same? [Do you use a different version of alfworld, etc.?]
The reason for asking is to understand whether you were able to get close to the results reported in ReAct, and what the exact differences might be, as the ReAct results seem very hard to reproduce.
Hi @ai-nikolai, sorry for the late reply; we've been quite busy lately. To answer your question, I believe the main difference is the prompting technique. We weren't aiming to reproduce ReAct's results, but to design a prompt and an evaluation process that is relatively fair to all the models. The prompt we used is listed in Appendix G of the paper. The evaluation process is located at:
`AgentBench/src/server/tasks/alfworld/task.py`, line 105 (commit 2f3c343)
> Can you elaborate how the environment is not exactly the same? [Do you use a different version of alfworld, etc.?]
The main differences come from adapting ALFWorld to our framework and setting some limitations and rules to avoid prolonged evaluation.
To sum up, you may have to investigate this problem further yourself.