Follow the instructions in https://github.com/XplainMind/LLMindCraft.
The data/train folder contains the 3-round conversation data.
Columns (a loading sketch follows this list):
thinking_trap: the client's thinking trap
thought: the client's thought from the original dataset
patient_round1: the client expresses their current feelings
doctor_round1: the doctor asks the client to separate the situation from the thought
patient_round2: the client answers with the situation and the thought
doctor_round2: the doctor asks the client to brainstorm
patient_round3: the client's brainstorming
polished_doctor_round3: the AI's final reply
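For a quick look at the data, here is a minimal loading sketch. It assumes the files under data/train are CSVs with the columns listed above; the glob pattern is a guess, so adjust it to the actual file names.

```python
import glob

import pandas as pd

# Load every CSV under data/train (the file layout is an assumption; adjust as needed).
frames = [pd.read_csv(path) for path in sorted(glob.glob("data/train/*.csv"))]
train_df = pd.concat(frames, ignore_index=True)

# The columns described above: thinking_trap, thought, patient_round1, doctor_round1,
# patient_round2, doctor_round2, patient_round3, polished_doctor_round3.
print(train_df.columns.tolist())
print(train_df[["thinking_trap", "patient_round1", "polished_doctor_round3"]].head())
```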
The data/test folder contains the client's side of the conversation (generated by ChatGPT) for round 1 and round 3, i.e., expressing their thoughts and brainstorming.
The result folder contains the outputs of ChatGLM, Llama2-7b-chat, and HealMe on the test data.
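As a rough illustration of how such outputs can be produced, the sketch below feeds the client's round-1 message from the test data to a generic Hugging Face chat model. The model name, test file path, and column name are assumptions made for illustration; the actual HealMe prompting setup may differ.

```python
import pandas as pd
from transformers import pipeline

# Model, file path, and column name are illustrative assumptions.
generator = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")
test_df = pd.read_csv("data/test/test.csv")

for _, row in test_df.head(3).iterrows():
    # Round 1: the client expresses their feelings; the generated reply should
    # ask the client to separate the situation from the thought.
    prompt = (
        "You are a therapist guiding cognitive restructuring.\n"
        f"Client: {row['patient_round1']}\n"
        "Therapist:"
    )
    reply = generator(prompt, max_new_tokens=128, do_sample=False)[0]["generated_text"]
    print(reply)
```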
The evaluate folder contains the evaluation code for ChatGLM and Llama2-7b-chat.
To train your own model on the conversation data, use the fine-tuning framework in https://github.com/XplainMind/LLMindCraft.
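Before fine-tuning, the three-round rows have to be serialized into the multi-turn format the framework expects. The sketch below writes a generic user/assistant chat JSONL; the output schema and file names are assumptions, so adapt them to what LLMindCraft actually requires.

```python
import glob
import json

import pandas as pd

# Each round pairs a client (patient) turn with the doctor/AI reply.
ROUNDS = [
    ("patient_round1", "doctor_round1"),
    ("patient_round2", "doctor_round2"),
    ("patient_round3", "polished_doctor_round3"),
]

frames = [pd.read_csv(path) for path in sorted(glob.glob("data/train/*.csv"))]
train_df = pd.concat(frames, ignore_index=True)

with open("train_chat.jsonl", "w", encoding="utf-8") as out:
    for _, row in train_df.iterrows():
        # Generic user/assistant message list; rename keys to match the framework's schema.
        messages = []
        for patient_col, doctor_col in ROUNDS:
            messages.append({"role": "user", "content": str(row[patient_col])})
            messages.append({"role": "assistant", "content": str(row[doctor_col])})
        out.write(json.dumps({"messages": messages}, ensure_ascii=False) + "\n")
```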