This is the repository for the "Benchmarking Soft-Prompting Methods" project, completed as part of the Natural Language Understanding course at NYU.
Team members: Shubham Jha (shubham.jha@nyu.edu), Sai Himal Allu (sa6782@nyu.edu), Aditya Kashilkar (ask9126@nyu.edu).
The repo contains code for the following methods:
- P-Tuning-v2
- PrefixTuning
- Prompt-Tuning
Instructions for training soft prompts with each method are provided inside the respective directories.
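All three methods share the same core idea: keep the pretrained model frozen and train only a small set of continuous "soft prompt" vectors that are prepended to the input. The sketch below is a minimal, framework-free illustration of that idea (it is not code from this repo; the function name and toy dimensions are hypothetical), showing how k trainable vectors are concatenated in front of the frozen token embeddings before the model consumes them.

```python
def prepend_soft_prompt(input_embeds, soft_prompt):
    """Prepend trainable soft-prompt vectors to a sequence of token embeddings.

    input_embeds: list of token embedding vectors, shape (seq_len, dim) — frozen.
    soft_prompt:  list of k trainable vectors, shape (k, dim) — the only
                  parameters updated during soft-prompt training.
    Returns the combined sequence of length k + seq_len.
    """
    return soft_prompt + input_embeds

# Toy example: k=2 soft-prompt vectors, 3 input tokens, embedding dim=4.
dim = 4
soft_prompt = [[0.0] * dim for _ in range(2)]   # trainable parameters
input_embeds = [[1.0] * dim for _ in range(3)]  # output of the frozen embedding layer
combined = prepend_soft_prompt(input_embeds, soft_prompt)
assert len(combined) == 5  # k + seq_len
```

Only `k * dim` parameters are trained here, versus the full model's parameter count, which is what makes these methods parameter-efficient.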