xlang-ai/BRIGHT

Chain-of-Thought baseline code

RulinShao opened this issue · 2 comments

Thanks for releasing this awesome work!!! I wonder if you plan to release the code that you used to run the CoT+X baseline in your paper? I would also appreciate it if you could share the generated reasoning steps if it's convenient. Thanks a lot!!!

Thanks a lot for your interest!

All the generated reasoning steps were uploaded to Hugging Face: https://huggingface.co/datasets/xlangai/BRIGHT. The subsets ending with "_reason" are the versions in which the queries have been replaced by LLM reasoning steps.
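If you want to inspect those reasoning-augmented queries directly, a sketch like the following (using the Hugging Face `datasets` library) should work; note that the config name `"gpt4_reason"`, the split name `"biology"`, and the `"query"` field are assumptions inferred from the naming scheme described above, so adjust them to your task and model:

```python
# Sketch: loading a "_reason" subset of BRIGHT with the HF `datasets` library.
# The config name ("gpt4_reason"), split ("biology"), and field ("query") are
# assumptions based on the naming convention above -- adjust as needed.

def reason_config(model: str) -> str:
    """Map a reasoning-model name to its assumed BRIGHT config name."""
    return f"{model}_reason"

if __name__ == "__main__":
    from datasets import load_dataset  # pip install datasets

    queries = load_dataset("xlangai/BRIGHT", reason_config("gpt4"), split="biology")
    # Each query should now contain the LLM's reasoning steps instead of the
    # original question text.
    print(queries[0]["query"])
```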

To evaluate models with GPT-4's CoT steps, you can run the following:

python run.py --task {task} --model {model} --reasoning gpt4

Feel free to let me know if there is anything I can help with!

Thank you so much for the timely response! This perfectly addressed my question, closing the issue ;)