/prompt_injection_research

This research proposes defense strategies against prompt injection in large language models to improve their robustness and security against unwanted outputs.

Primary LanguagePythonMIT LicenseMIT

Watchers

No one’s watching this repository yet.