promptbench

A robustness evaluation framework for large language models on adversarial prompts

Primary language: Python · License: MIT
