Cue-CoT: Chain-of-thought Prompting for Responding to In-depth Dialogue Questions with LLMs [EMNLP 2023 Findings]

This is the repo for the CUHK–HIT Dialogue CoT project, which builds and shares our evaluation benchmarks and our methods for generating personalized, empathetic, and compassionate responses.

Note:

  • We will publish the CoT-tuned model soon. Please stay tuned.
  • Usage and License Notices: We use the OpenAI API to generate part of our evaluation data. The datasets should not be used outside of research purposes.
  • Part of the constructed datasets are based on several existing research works. Please cite them and follow their usage policies if you use the corresponding evaluation data.
  • We will not release our evaluation data based on PsyQA, in accordance with the privacy policy of the original paper. However, if you have access to PsyQA, the evaluation data can be constructed automatically with our provided scripts here.

Method

[Figure: overview of the Cue-CoT method]

Global Positions of Current LLMs

[Figure: global positions of current LLMs]

Demo

Step 1: Put your OpenAI API key in web_demo/config.json (see the sketch below).
Step 2: Run python web_demo/web.py.
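
Neither the exact schema of web_demo/config.json nor the way web_demo/web.py reads it is documented in this README, so the following is only a minimal sketch: it assumes a hypothetical field named "openai_api_key" and the pre-1.0 openai Python client's module-level api_key attribute; check the files in web_demo/ for the actual names.

```python
import json

import openai

# Load the demo configuration; the field name "openai_api_key" is an assumption.
with open("web_demo/config.json") as f:
    config = json.load(f)

# Example web_demo/config.json content: {"openai_api_key": "sk-..."}
openai.api_key = config["openai_api_key"]
```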

Cases

Please open web_demo/index.html in a browser.

Acknowledgement

We would like to thank all related open-source projects, especially but not limited to the following:

  • https://github.com/LianjiaTech/BELLE/
  • https://github.com/THUDM/ChatGLM-6B
  • https://github.com/kaixindelele/ChatPaper
  • https://github.com/lm-sys/FastChat/
  • ...

Citations

@misc{wang2023cuecot,
      title={Cue-CoT: Chain-of-thought Prompting for Responding to In-depth Dialogue Questions with LLMs}, 
      author={Hongru Wang and Rui Wang and Fei Mi and Yang Deng and Zezhong Wang and Bin Liang and Ruifeng Xu and Kam-Fai Wong},
      year={2023},
      eprint={2305.11792},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}