This is the submission for the Project for ML in Cybersecurity at NYU Tandon.
Developers: Karan Sheth, Mujahid Ali Quidwai, Utkarsh Shekhar
The objective of this project is to analyse the impact of Prompt Injection on Large Language Models like GPT-3. The project further attempts to automate the process of generating promtpt by fine-tuning a T5-Base model for creating false data
- OpenAI playground account to run the injection queries
- GPU that can load T5-Base in memory
- Load the pre-trained results available for the model.
- run test function with your custom imput
- Use the prediction to generate the prompt from the parser and run it on GPT-3
Sample of injections attacks on GPT-3