Cue-CoT: Chain-of-thought Prompting for Responding to In-depth Dialogue Questions with LLMs [EMNLP 2023 Findings]
This is the repo for the CUHK and HIT Dialogue CoT project (Cue-CoT), which aims to build and share our evaluation benchmarks and methods for generating personalized, empathetic, and compassionate responses. The repo contains:
- The 3.0K data samples (500 for each dataset) used for public evaluation; we reserve another half of the data for private evaluation.
- The demo for chatting with the model using our Cue-CoT prompting.
- Case studies on the different datasets.
- The code for generating the data (to be released soon).
- The code for evaluating the model (to be released soon).
Note:
- We will release the CoT-tuned model soon. Please stay tuned.
- Usage and License Notices: We use the OpenAI API to generate part of our evaluation data. The datasets should be used for research purposes only.
- Parts of the constructed datasets are based on several existing research works. Please cite them and follow their usage policies if you use the corresponding evaluation data.
- We do not release our evaluation data based on PsyQA, in accordance with the privacy policy of the original paper. However, if you have access to PsyQA, the evaluation data can be constructed automatically with our provided scripts here.
Step 1: Put your OpenAI API key in web_demo/config.json.
Step 2: Run the command: python web_demo/web.py
Then open web_demo/index.html in your browser.
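To give a concrete picture of what the demo does, here is a minimal sketch of the two-stage Cue-CoT idea: first infer the user's status exhibited in the dialogue, then generate a response conditioned on it. This is illustrative only, not the repo's actual implementation; the config key name `openai_api_key`, the prompts, and the legacy `openai<1.0` ChatCompletion interface are assumptions.

```python
# Illustrative sketch of two-stage Cue-CoT prompting (not the repo's code).
# Assumes web_demo/config.json stores the key as {"openai_api_key": "..."}
# and the pre-1.0 openai Python package is installed.
import json
import openai

with open("web_demo/config.json") as f:
    openai.api_key = json.load(f)["openai_api_key"]  # assumed key name

def chat(prompt: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]

dialogue = "User: I failed my exam again and I feel like giving up."

# Stage 1: infer the user's status underlying the dialogue.
status = chat(
    f"Dialogue:\n{dialogue}\n\n"
    "Describe the user's emotion, personality, and psychological status step by step."
)

# Stage 2: respond, conditioned on the inferred status.
reply = chat(
    f"Dialogue:\n{dialogue}\n\nUser status:\n{status}\n\n"
    "Given the user's status, write a personalized and empathetic response."
)
print(reply)
```

Splitting the two stages into separate calls keeps the inferred user status visible, which is also convenient for logging and case studies.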
We would like to thank all related open-source projects, especially but not limited to the following:
- https://github.com/LianjiaTech/BELLE/
- https://github.com/THUDM/ChatGLM-6B
- https://github.com/kaixindelele/ChatPaper
- https://github.com/lm-sys/FastChat/
- ...
@misc{wang2023cuecot,
      title={Cue-CoT: Chain-of-thought Prompting for Responding to In-depth Dialogue Questions with LLMs},
      author={Hongru Wang and Rui Wang and Fei Mi and Yang Deng and Zezhong Wang and Bin Liang and Ruifeng Xu and Kam-Fai Wong},
      year={2023},
      eprint={2305.11792},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}