Auditing Large Language Models made easy!
Language models enable companies to build and launch innovative applications to improve productivity and increase customer satisfaction. However, it’s been known that LLMs can hallucinate, generate adversarial responses that can harm users, and even expose private information that they were trained on when prompted or unprompted. It's more critical than ever for ML and software application teams to minimize these risks and weaknesses before launching LLMs and NLP models. As a result, it’s important for you to include a process to audit language models thoroughly before production. The Fiddler Auditor enables you to test LLMs and NLP models, identify weaknesses in the models, and mitigate potential adversarial outcomes before deploying them to production.
Fiddler Auditor supports
- Red-teaming LLMs for your use-case with prompt perturbation
- Integration with LangChain
- Custom evaluation metrics
- Generative and Discriminative NLP models
- Comparison of LLMs
Auditor is available on PyPI and we test on Python 3.8 and above. We recommend creating a virtual python environment and installing using the following command
pip install fiddler-auditor
You can install from source after cloning this repo using the following command
pip install .
- Fiddler Auditor Quickstart
- Evaluate LLMs with custom metrics
- Prompt injection attack with custom transformation
We are continuously updating this library to support language models as they evolve.
- Contributions in the form of suggestions and PRs to Fiddler Auditor are welcome!
- If you encounter a bug, please feel free to raise issues in this repository.
For step-by-step instructions follow the Contribution Guide.
- For questions and support, join the Fiddler Community
- Discover the latest guides, videos, and research with the Fiddler Resources Library
- Stay informed by following us on Twitter
- Subscribe to our monthly newsletter
- Request a demo