A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks.