
Pytector

As presented at the Oxford Workshop on Safety of AI Systems, including Demo Sessions and Tutorials

Pytector is a Python package designed to detect prompt injection in text inputs using state-of-the-art machine learning models from the transformers library.

Disclaimer

Pytector is still a prototype and cannot provide 100% protection against prompt injection attacks!

Features

  • Detect prompt injections with pre-trained models.
  • Support for multiple models, including DeBERTa and DistilBERT, as well as ONNX versions.
  • Easy-to-use interface with customizable threshold settings (see the second Usage example below).

Installation

Install the latest release from PyPI:

pip install pytector

Alternatively, install Pytector directly from source:

git clone https://github.com/MaxMLang/pytector.git
cd pytector
pip install .
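
Either way, a quick import confirms the package is available:

python -c "import pytector"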

Usage

To use Pytector, import the PromptInjectionDetector class and create an instance with a predefined model name or a custom model URL.

import pytector

# Initialize the detector with a pre-defined model
detector = pytector.PromptInjectionDetector(model_name_or_url="deberta")

# Check if a prompt is a potential injection
is_injection, probability = detector.detect_injection("Your suspicious prompt here")
print(f"Is injection: {is_injection}, Probability: {probability}")

Documentation

For full documentation, visit the docs directory.

Contributing

Contributions are welcome! Please read our Contributing Guide for details on our code of conduct and the process for submitting pull requests.

License

This project is licensed under the MIT License - see the LICENSE file for details.