compass-ctf-team/prompt_injection_research
This research proposes defense strategies against prompt injection in large language models to improve their robustness and security against unwanted outputs.
PythonMIT
Watchers
No one’s watching this repository yet.