guardrails
There are 129 repositories under the guardrails topic.
BoundaryML/baml
The AI framework that adds the engineering to prompt engineering (Python/TS/Ruby/Java/C#/Rust/Go compatible)
microsoft/presidio
An open-source framework for detecting, redacting, masking, and anonymizing sensitive data (PII) across text, images, and structured data. Supports NLP, pattern matching, and customizable pipelines.
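Presidio's real recognizers are NLP-backed and far more robust, but the pattern-matching idea behind PII redaction can be sketched with the standard library alone. The two regexes below are simplified illustrative assumptions, not Presidio's built-in detectors or API:

```python
import re

# Illustrative regex "recognizers"; these patterns are deliberately
# simplified assumptions, not Presidio's actual detection logic.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US_PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected entity with a <TYPE> placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text
```

For example, `redact("Call 555-123-4567 or mail a@b.com")` yields `"Call <US_PHONE> or mail <EMAIL>"`.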
NVIDIA-NeMo/Guardrails
NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
deepsense-ai/ragbits
Building blocks for rapid development of GenAI applications
maximhq/bifrost
Fastest LLM gateway (50x faster than LiteLLM) with adaptive load balancer, cluster mode, guardrails, 1000+ models support & <100 µs overhead at 5k RPS.
souvikmajumder26/Multi-Agent-Medical-Assistant
⚕️GenAI powered multi-agentic medical diagnostics and healthcare research assistance chatbot. 🏥 Designed for healthcare professionals, researchers and patients.
globalbao/awesome-azure-policy
A curated list of blogs, videos, tutorials, code, tools, scripts, and anything useful to help you learn Azure Policy - by @JesseLoudon
privacera/paig
PAIG (Pronounced similar to paige or payj) is an open-source project designed to protect Generative AI (GenAI) applications by ensuring security, safety, and observability.
dipampaul17/AgentGuard
Real-time guardrail that shows token spend & kills runaway LLM/agent loops.
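The core mechanism described here, metering token spend and halting a loop once it crosses a budget, can be sketched in a few lines. The class name, flat per-token price, and exception below are hypothetical illustrations, not AgentGuard's API (real tools price per model and per input/output token):

```python
class BudgetExceeded(RuntimeError):
    """Raised when cumulative spend crosses the configured cap."""

class SpendGuard:
    def __init__(self, max_usd: float, usd_per_1k_tokens: float = 0.002):
        # Hypothetical flat rate; a real guard would price per model.
        self.max_usd = max_usd
        self.rate = usd_per_1k_tokens / 1000.0
        self.spent_usd = 0.0

    def record(self, tokens: int) -> float:
        """Meter one LLM call; raise rather than let a runaway loop continue."""
        self.spent_usd += tokens * self.rate
        if self.spent_usd > self.max_usd:
            raise BudgetExceeded(
                f"spent ${self.spent_usd:.4f} > cap ${self.max_usd}"
            )
        return self.spent_usd
```

Each agent step calls `record()` with its token count, so a loop that never terminates is killed as soon as it exhausts the budget instead of running up an open-ended bill.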
aipotheosis-labs/gate22
Open-source MCP gateway and control plane for teams to govern which tools agents can use, what they can do, and how it’s audited—across agentic IDEs like Cursor, or other agents and AI tools.
aiplaybookin/novice-ChatGPT
ChatGPT API Usage using LangChain, LlamaIndex, Guardrails, AutoGPT and more
raga-ai-hub/raga-llm-hub
Framework for LLM evaluation, guardrails and security
openguardrails/openguardrails
OpenGuardrails: Developer-First Open-Source AI Security Platform - Comprehensive Security Protection for AI Applications
langwatch/langevals
LangEvals aggregates various language model evaluators into a single platform, providing a standard interface for a multitude of scores and LLM guardrails, for you to protect and benchmark your LLM models and pipelines.
arthur-ai/arthur-engine
Make AI work for everyone - monitoring and governance for your AI/ML
invariantlabs-ai/invariant-gateway
LLM proxy to observe and debug what your AI agents are doing.
xiangxinai/xiangxin-guardrails
Xiangxin Guardrails is an open-source, context-aware AI guardrails platform that provides protection against prompt injection attacks, content safety risks, and data leakage. It can be deployed as a security gateway or integrated via API, offering enterprise-grade, fully private deployment options.
whitecircle-ai/circle-guard-bench
First-of-its-kind AI benchmark for evaluating the protection capabilities of large language model (LLM) guard systems (guardrails and safeguards)
sebuzdugan/frai
Open-source toolkit for responsible AI: CLI + SDK to scan code, collect evidence, and generate model cards, risk files, evals, and RAG indexes.
enguard-ai/awesome-ai-guardrails
A curated list of materials on AI guardrails
IDinsight/ask-a-question
Trustworthy question-answering AI plugin for chatbots in the social sector with advanced content performance analysis.
codingforentrepreneurs/django-ai-agent
Learn how to create an AI Agent with Django, LangGraph, and Permit.
infralicious/awesome-service-control-policies
Awesome AWS service control policies (SCPs), Resource Control Policies (RCPs), and other organizational policies
presidio-oss/hai-guardrails
A TypeScript library providing a set of guards for LLM (Large Language Model) applications
AmenRa/GuardBench
A Python library for evaluating guardrail models.
FareedKhan-dev/agentic-guardrails
Layered guardrails to make agentic AI safer and more reliable.
benitomartin/github-issues-multiagent-intelligence
Agentic GitHub issues retrieval on Kubernetes
jagreehal/ai-sdk-guardrails
Middleware for the Vercel AI SDK that adds safety, quality control, and cost management to your AI applications by intercepting prompts and responses.
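The intercept-then-forward pattern this middleware describes can be sketched framework-free (the Vercel AI SDK itself is TypeScript; this is a language-neutral illustration). The length cap, blocked-terms policy, and `call_model` stub are assumptions, not the library's API:

```python
def call_model(prompt: str) -> str:
    """Stand-in stub for a real LLM call."""
    return f"echo: {prompt}"

MAX_PROMPT_CHARS = 2000          # assumed input policy
BLOCKED_OUTPUT_TERMS = ("password",)  # assumed output policy

def guarded_call(prompt: str) -> str:
    # Input guard: reject oversized prompts before spending tokens.
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds length guardrail")
    response = call_model(prompt)
    # Output guard: redact terms the policy forbids.
    for term in BLOCKED_OUTPUT_TERMS:
        response = response.replace(term, "[redacted]")
    return response
```

Wrapping the model call this way keeps safety and cost checks in one place, so application code never talks to the model directly.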
amazon-science/TurboFuzzLLM
TurboFuzzLLM: Turbocharging Mutation-based Fuzzing for Effectively Jailbreaking Large Language Models in Practice
taladari/rag-firewall
Client-side retrieval firewall for RAG systems — blocks prompt injection and secret leaks, re-ranks stale or untrusted content, and keeps all data inside your environment.
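A minimal version of that client-side filtering step, scanning retrieved chunks for injection phrases and secret-like strings before they reach the prompt, might look like the sketch below. The deny-list and secret pattern are assumptions for illustration, not the library's actual rules:

```python
import re

# Assumed deny-list and a crude secret pattern; a real firewall would
# use scoring, provenance checks, and richer detectors.
INJECTION_PHRASES = ("ignore previous instructions", "disregard all prior")
SECRET_RE = re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9]{16,}\b")

def filter_chunks(chunks: list[str]) -> list[str]:
    """Drop retrieved chunks that look like injection attempts or leaked secrets."""
    safe = []
    for chunk in chunks:
        lowered = chunk.lower()
        if any(phrase in lowered for phrase in INJECTION_PHRASES):
            continue
        if SECRET_RE.search(chunk):
            continue
        safe.append(chunk)
    return safe
```

Because the filter runs before the chunks are concatenated into the prompt, untrusted retrieved content never gets a chance to override the system instructions.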
SSK-14/chatbot-guardrails
💂🏼 Build your documentation AI with NeMo Guardrails
Harras3/Enterprise-Grade-RAG
A RAG-based chatbot that incorporates a semantic cache and guardrails.
aimonlabs/aimon-python-sdk
This repo hosts the Python SDK and related examples for AIMon, which is a proprietary, state-of-the-art system for detecting LLM quality issues such as Hallucinations. It can be used during offline evals, continuous monitoring or inline detection. We offer various model quality metrics that are fast, reliable and cost-effective.
NVIDIA-AI-Blueprints/securing-agentic-ai-developer-day
Securing Agentic AI Developer Day shows developers how to take an agentic AI reference workflow to production securely.
matank001/copilot-agents-guard
LLM-as-a-Judge security layer for Microsoft Copilot Studio agents
thrivewithai/langchain-fixie-marvin
We compared LangChain, Fixie, and Marvin