prompt-injection
There are 95 repositories under prompt-injection topic.
CyberAlbSecOP/Awesome_GPT_Super_Prompting
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.
protectai/llm-guard
The Security Toolkit for LLM Interactions
abilzerian/LLM-Prompt-Library
A playground of highly experimental prompts, Jinja2 templates & scripts for machine intelligence models from OpenAI, Anthropic, DeepSeek, Meta, Mistral, Google, xAI & others. Alex Bilzerian (2022-2025).
protectai/rebuff
LLM Prompt Injection Detector
utkusen/promptmap
a security scanner for custom LLM applications
whylabs/langkit
🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring safety & security. 🛡️ Features include text quality, relevance metrics, & sentiment analysis. 📊 A comprehensive tool for LLM observability. 👀
yunanwg/brilliant-CV
💼 another CV template for your job application, yet powered by Typst and more
zacfrulloni/Prompt-Engineering-Holy-Grail
Land your first client with vibe coding: skool.com/lovable-vibe-coding/about
tldrsec/prompt-injection-defenses
Every practical and proposed defense against prompt injection.
deadbits/vigil-llm
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
automorphic-ai/aegis
Self-hardening firewall for large language models
langgptai/Awesome-Multimodal-Prompts
Prompts of GPT-4V & DALL-E3 to full utilize the multi-modal ability. GPT4V Prompts, DALL-E3 Prompts.
yunwei37/prompt-hacker-collections
prompt attack-defense, prompt Injection, reverse engineering notes and examples | 提示词对抗、破解例子与笔记
dropbox/llm-security
Dropbox LLM Security research code and results
shell-nlp/gpt_server
gpt_server是一个用于生产级部署LLMs、Embedding、Reranker、ASR、TTS、文生图、图片编辑和文生视频的开源框架。
liu00222/Open-Prompt-Injection
This repository provides implementation to formalize and benchmark Prompt Injection attacks and defenses
lakeraai/pint-benchmark
A benchmark for prompt injection detection systems.
TrustAI-laboratory/Learn-Prompt-Hacking
This is The most comprehensive prompt hacking course available, which record our progress on a prompt engineering and prompt hacking course.
kereva-dev/kereva-scanner
Code scanner to check for issues in prompts and LLM calls
NullTrace-Security/Exploiting-AI
This class is a broad overview and dive into Exploiting AI and the different attacks that exist, and best practice strategies.
pasquini-dario/project_mantis
Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks
HumanCompatibleAI/tensor-trust
A prompt injection game to collect data for robust ML research
wearetyomsmnv/Awesome-LLMSecOps
LLM | Security | Operations in one github repo with good links and pictures.
gdalmau/lakera-gandalf-solutions
My inputs for the LLM Gandalf made by Lakera
ZapDos7/lakera-gandalf
My solutions for Lakera's Gandalf
GPTSafe/PromptGuard
Build production ready apps for GPT using Node.js & TypeScript
sinanw/llm-security-prompt-injection
This project investigates the security of large language models by performing binary classification of a set of input prompts to discover malicious prompts. Several approaches have been analyzed using classical ML algorithms, a trained LLM model, and a fine-tuned LLM model.
jailbreakme-xyz/jailbreak
jailbreakme.xyz is an open-source decentralized app (dApp) where users are challenged to try and jailbreak pre-existing LLMs in order to find weaknesses and be rewarded. 🏆
LostOxygen/llm-confidentiality
Whispers in the Machine: Confidentiality in LLM-integrated Systems
grepstrength/WideOpenAI
Short list of indirect prompt injection attacks for OpenAI-based models.
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package
microsoft/gandalf_vs_gandalf
Turning Gandalf against itself. Use LLMs to automate playing Lakera Gandalf challenge without needing to set up an account with a platform provider.
peluche/deck-of-many-prompts
Manual Prompt Injection / Red Teaming Tool
SemanticBrainCorp/SemanticShield
The Security Toolkit for managing Generative AI(especially LLMs) and Supervised Learning processes(Learning and Inference).
lakeraai/chainguard
Guard your LangChain applications against prompt injection with Lakera ChainGuard.