This toolkit accompanies the paper "DIALIGHT: Lightweight Multilingual Development and Evaluation of Task-Oriented Dialogue Systems with Large Language Models".
Developed with ❤️️ by Songbo Hu, Xiaobin Wang, Zhangdie Yuan, Anna Korhonen, and Ivan Vulić.
- [2024-03-20] DIALIGHT has been accepted at NAACL-HLT System Demonstrations 2024.
- [2024-01-15] We release our toolkit together with our introductory paper. 🎉
This repository is structured into three main directories:
- `deployment`: Contains scripts and configuration files essential for the one-click deployment of our human evaluation tool.
- `human_eval_tool`: Contains the source code for our human evaluation tool, which is designed to facilitate human evaluation experiments of dialogue systems.
- `tod_system`: Contains the source code for developing and evaluating multilingual task-oriented dialogue systems.
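For orientation, the layout can be summarised as follows (the annotations simply restate the descriptions above):

```
.
├── deployment/        # one-click deployment scripts and configuration
├── human_eval_tool/   # source code for the human evaluation tool
└── tod_system/        # multilingual ToD system development and evaluation
```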
Each directory contains detailed instructions to guide you through the corresponding part of our toolkit. Please refer to them to get started:
- For ToD System Development: If your focus is on developing dialogue systems, dedicated instructions for running our baseline systems are provided in `tod_system/README.md`.
- For Human Evaluation Tool Customisation: Should you wish to tailor the human evaluation tool to your specific needs, instructions are available in `human_eval_tool/README.md`.
- For One-Click Deployment: If you would like to use our one-click deployment for the human evaluation setup, deployment instructions can be found in `deployment/README.md`.
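Whichever path you take, the typical first step is the same: clone the repository and follow the README in the relevant directory. A minimal sketch is shown below; `<repository-url>` is a placeholder for this repository's clone URL.

```bash
# Clone the repository (replace <repository-url> with this repo's URL),
# then change into the directory matching your use case and follow its README.
git clone <repository-url> dialight
cd dialight/tod_system         # ToD system development
# cd dialight/human_eval_tool  # human evaluation tool customisation
# cd dialight/deployment       # one-click deployment
```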
We welcome contributions and feedback on this project. If you have suggestions or want to contribute, please feel free to fork the repository and submit pull requests.
Encountered an issue? Please submit it through the GitHub issue tracker.
For direct inquiries or specific concerns, you can also contact us via email: sh2091@cam.ac.uk.
We extend our gratitude to the open-source community for their contributions to this project. The development of our source code was significantly aided by LLMs such as GPT-4.