guardrails

There are 129 repositories under the guardrails topic.

  • BoundaryML/baml

    The AI framework that adds the engineering to prompt engineering (Python/TS/Ruby/Java/C#/Rust/Go compatible)

    Language: Rust · 6.7k stars
  • microsoft/presidio

    An open-source framework for detecting, redacting, masking, and anonymizing sensitive data (PII) across text, images, and structured data. Supports NLP, pattern matching, and customizable pipelines.

    Language: Python · 6.1k stars
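
    As a rough sketch of the usual Presidio flow (assuming the presidio-analyzer and presidio-anonymizer packages are installed with their default recognizers; the example text is made up):

        # Detect PII in a string, then replace it with entity-type placeholders.
        from presidio_analyzer import AnalyzerEngine
        from presidio_anonymizer import AnonymizerEngine

        text = "My name is Jane Doe and my phone number is 212-555-0101."

        analyzer = AnalyzerEngine()
        results = analyzer.analyze(text=text, language="en")  # find PII spans

        anonymizer = AnonymizerEngine()
        redacted = anonymizer.anonymize(text=text, analyzer_results=results)
        print(redacted.text)  # e.g. "My name is <PERSON> and my phone number is <PHONE_NUMBER>."
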
  • NVIDIA-NeMo/Guardrails

    NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.

    Language: Python · 5.3k stars
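
    A minimal sketch of how NeMo Guardrails typically wraps an LLM call (assuming the nemoguardrails package is installed and a local ./config directory contains a config.yml defining the model and rails):

        from nemoguardrails import LLMRails, RailsConfig

        # Load the guardrails configuration (model settings, rails, flows).
        config = RailsConfig.from_path("./config")
        rails = LLMRails(config)

        # Generate a response with the configured input/output rails applied.
        response = rails.generate(messages=[
            {"role": "user", "content": "Hello! What can you do for me?"}
        ])
        print(response["content"])
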
  • deepsense-ai/ragbits

    Building blocks for rapid development of GenAI applications

    Language: Python · 1.6k stars
  • maximhq/bifrost

    Fast LLM gateway (claims to be 50x faster than LiteLLM) with an adaptive load balancer, cluster mode, guardrails, support for 1,000+ models, and <100 µs overhead at 5k RPS.

    Language: Go · 1.1k stars
  • souvikmajumder26/Multi-Agent-Medical-Assistant

    ⚕️ GenAI-powered multi-agent medical diagnostics and healthcare research assistant chatbot. 🏥 Designed for healthcare professionals, researchers, and patients.

    Language: Python
  • globalbao/awesome-azure-policy

    A curated list of blogs, videos, tutorials, code, tools, scripts, and anything useful to help you learn Azure Policy - by @JesseLoudon

  • privacera/paig

    PAIG (pronounced like "paige" or "payj") is an open-source project designed to protect Generative AI (GenAI) applications by ensuring security, safety, and observability.

    Language: CSS
  • dipampaul17/AgentGuard

    Real-time guardrail that shows token spend & kills runaway LLM/agent loops.

    Language: JavaScript
  • aipotheosis-labs/gate22

    Open-source MCP gateway and control plane for teams to govern which tools agents can use, what they can do, and how it is audited, across agentic IDEs like Cursor and other agents and AI tools.

    Language: TypeScript
  • aiplaybookin/novice-ChatGPT

    ChatGPT API usage with LangChain, LlamaIndex, Guardrails, AutoGPT, and more

    Language: Jupyter Notebook
  • raga-ai-hub/raga-llm-hub

    Framework for LLM evaluation, guardrails and security

    Language: Python
  • openguardrails/openguardrails

    OpenGuardrails: a developer-first, open-source AI security platform offering comprehensive security protection for AI applications

    Language: Python
  • langwatch/langevals

    LangEvals aggregates various language model evaluators into a single platform, providing a standard interface to a multitude of scores and LLM guardrails so you can protect and benchmark your LLM models and pipelines.

    Language: Jupyter Notebook
  • arthur-ai/arthur-engine

    Make AI work for everyone: monitoring and governance for your AI/ML

    Language: Python
  • invariantlabs-ai/invariant-gateway

    LLM proxy to observe and debug what your AI agents are doing.

    Language: Python
  • xiangxinai/xiangxin-guardrails

    Xiangxin Guardrails is an open-source, context-aware AI guardrails platform that provides protection against prompt injection attacks, content safety risks, and data leakage. It can be deployed as a security gateway or integrated via API, offering enterprise-grade, fully private deployment options.

    Language: Python
  • whitecircle-ai/circle-guard-bench

    First-of-its-kind AI benchmark for evaluating the protection capabilities of large language model (LLM) guard systems (guardrails and safeguards)

    Language: Python
  • sebuzdugan/frai

    Open-source toolkit for responsible AI: CLI + SDK to scan code, collect evidence, and generate model cards, risk files, evals, and RAG indexes.

    Language: JavaScript
  • enguard-ai/awesome-ai-guardrails

    A curated list of materials on AI guardrails

    Language: Python
  • IDinsight/ask-a-question

    Trustworthy question-answering AI plugin for chatbots in the social sector with advanced content performance analysis.

    Language: Python
  • codingforentrepreneurs/django-ai-agent

    Learn how to create an AI Agent with Django, LangGraph, and Permit.

    Language: Jupyter Notebook
  • infralicious/awesome-service-control-policies

    Awesome AWS service control policies (SCPs), Resource Control Policies (RCPs), and other organizational policies

  • presidio-oss/hai-guardrails

    A TypeScript library providing a set of guards for LLM (Large Language Model) applications

    Language: TypeScript
  • AmenRa/GuardBench

    A Python library for evaluating guardrail models.

    Language: Python
  • FareedKhan-dev/agentic-guardrails

    Layered guardrails to make agentic AI safer and more reliable.

    Language: Jupyter Notebook
  • benitomartin/github-issues-multiagent-intelligence

    Agentic GitHub Issues Retrieval on Kubernetes

    Language: Python
  • jagreehal/ai-sdk-guardrails

    Middleware for the Vercel AI SDK that adds safety, quality control, and cost management to your AI applications by intercepting prompts and responses.

    Language: TypeScript
  • amazon-science/TurboFuzzLLM

    TurboFuzzLLM: Turbocharging Mutation-based Fuzzing for Effectively Jailbreaking Large Language Models in Practice

    Language: Python
  • taladari/rag-firewall

    Client-side retrieval firewall for RAG systems — blocks prompt injection and secret leaks, re-ranks stale or untrusted content, and keeps all data inside your environment.

    Language: Python
  • SSK-14/chatbot-guardrails

    💂🏼 Build your documentation AI with NeMo Guardrails

    Language: Python
  • Harras3/Enterprise-Grade-RAG

    A RAG-based chatbot that incorporates a semantic cache and guardrails.

    Language: HTML
  • aimonlabs/aimon-python-sdk

    This repo hosts the Python SDK and related examples for AIMon, a proprietary, state-of-the-art system for detecting LLM quality issues such as hallucinations. It can be used during offline evals, continuous monitoring, or inline detection. We offer various model quality metrics that are fast, reliable, and cost-effective.

    Language: Python
  • NVIDIA-AI-Blueprints/securing-agentic-ai-developer-day

    Securing Agentic AI Developer Day shows developers how to take an agentic AI reference workflow to production securely.

    Language: Jupyter Notebook
  • matank001/copilot-agents-guard

    LLM-as-a-Judge security layer for Microsoft Copilot Studio agents

    Language: Python
  • thrivewithai/langchain-fixie-marvin

    We compared LangChain, Fixie, and Marvin

    Language: Python