Pinned Repositories
detect_pii
Guardrails AI: PII Filter - Validates that any text does not contain any PII
detect_prompt_injection
A Guardrials Hub validator used to detect if prompt injection is present
guardrails
Adding guardrails to large language models.
guardrails-api
Docker compose stub of Guardrails as a Service
guardrails-internal
Adding guardrails to large language models.
guardrails-js
A Javascript wrapper for guardrails-ai
guardrails-lite-server
A bare minimum deployment of guardrails as a service.
guardrails_pii
unusual_prompt
A Guardrails AI input validator that detects if the user is trying to jailbreak an LLM using unusual prompting techniques that involve jailbreaking and tricking the LLM
validator-template
A test validator repo that includes just the regex validator
Guardrails AI's Repositories
guardrails-ai/guardrails
Adding guardrails to large language models.
guardrails-ai/guardrails-api
Docker compose stub of Guardrails as a Service
guardrails-ai/guardrails_pii
guardrails-ai/detect_pii
Guardrails AI: PII Filter - Validates that any text does not contain any PII
guardrails-ai/provenance_llm
Guardrails AI: Provenance LLM - Validates that the LLM-generated text is supported by the provided contexts.
guardrails-ai/llamaguard-7b
guardrails-ai/qa_relevance_llm_eval
Guardrails AI: QA Relevance LLM eval - Validates that an answer is relevant to the question asked by asking the LLM to self evaluate
guardrails-ai/toxic_language
Guardrails AI: Toxic language - Validates that the generated text is toxic
guardrails-ai/competitor_check
Guardrails AI: Competitor Check - Validates that LLM-generated text is not naming any competitors from a given list
guardrails-ai/guardrails-api-client
OpenAPI Specifications and scripts for generating SDKs for the various Guardrails services
guardrails-ai/nsfw_text
A Guardrails AI validator that detects inappropriate/ Not Safe For Work (NSFW) text during validation
guardrails-ai/provenance_embeddings
Guardrails AI: Provenance Embeddings - Validates that LLM-generated text matches some source text based on distance in embedding space
guardrails-ai/restricttotopic
Validator for GuardrailsHub to check if a text is related with a topic.
guardrails-ai/bias_check
guardrails-ai/french_toxic_language
guardrails-ai/gibberish_text
A Guardrails AI validator that checks whether an LLM-generated response contains gibberish
guardrails-ai/sensitive_topics
guardrails-ai/bert_toxic
guardrails-ai/cucumber_expression_match
guardrails-ai/detect_jailbreak
Prototype Jailbreak Detection Guard
guardrails-ai/financial_tone
guardrails-ai/high_quality_translation_validator
Fork of BrainLogic AI's validator
guardrails-ai/integrations-extras
Community developed integrations and plugins for the Datadog Agent.
guardrails-ai/interfaces
Shared interfaces defined in JSON Schema.
guardrails-ai/NeMo-Guardrails
NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
guardrails-ai/politeness_check
guardrails-ai/qa-prompt-relevance
guardrails-ai/responsiveness_check
A validator which ensures that a generated output answers the prompt given.
guardrails-ai/shieldgemma-2b
guardrails-ai/wiki_provenance
A Guardrails AI validator that detects hallucinations using Wikipedia as the source of truth