Pinned Repositories
detect_pii
Guardrails AI: PII Filter - Validates that text does not contain PII
detect_prompt_injection
A Guardrails Hub validator used to detect whether prompt injection is present
guardrails
Adding guardrails to large language models.
guardrails-api
Docker compose stub of Guardrails as a Service
guardrails-js
A JavaScript wrapper for guardrails-ai
guardrails-lite-server
A bare-minimum deployment of guardrails as a service.
guardrails_pii
provenance_llm
Guardrails AI: Provenance LLM - Validates that the LLM-generated text is supported by the provided contexts.
unusual_prompt
A Guardrails AI input validator that detects whether the user is trying to jailbreak an LLM using unusual prompting techniques
validator-template
A test validator repo that includes just the regex validator
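The pinned validators above all follow the same basic contract: take a value, check it against a rule, and report pass or fail. A minimal pure-Python sketch of that pattern is below; the names (`Result`, `regex_match`, `lowercase`) are hypothetical illustrations, not the actual Guardrails validator API.

```python
import re
from dataclasses import dataclass

@dataclass
class Result:
    """Hypothetical pass/fail outcome; the real library's result type differs."""
    passed: bool
    message: str = ""

def regex_match(value: str, pattern: str) -> Result:
    """Pass when the whole value matches the given regex."""
    if re.fullmatch(pattern, value):
        return Result(True)
    return Result(False, f"{value!r} does not match {pattern!r}")

def lowercase(value: str) -> Result:
    """Pass when the value is entirely lower case."""
    ok = value == value.lower()
    return Result(ok, "" if ok else f"{value!r} is not lower case")
```

For example, `regex_match("abc123", r"[a-z]+\d+")` passes, while `lowercase("Hello")` fails with an explanatory message.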
Guardrails AI's Repositories
guardrails-ai/validator-template
A test validator repo that includes just the regex validator
guardrails-ai/guardrails-lite-server
A bare-minimum deployment of guardrails as a service.
guardrails-ai/unusual_prompt
A Guardrails AI input validator that detects whether the user is trying to jailbreak an LLM using unusual prompting techniques
guardrails-ai/shieldgemma-2b
guardrails-ai/lowercase
Guardrails AI: Lower case validator - Validates that a value is lower case
guardrails-ai/regex_match
guardrails-ai/similar_to_document
Guardrails AI: Similar to Document - Validates that a value is similar to the document
guardrails-ai/arize-js
guardrails-ai/ban_list
A Guardrails Validator that allows you to ban certain keywords using fuzzy matching
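A fuzzy ban-list check of the kind described above can be sketched with the standard library's `difflib`. This is illustrative only: the real `ban_list` validator's matching strategy and default threshold may differ.

```python
import difflib

def is_banned(text: str, ban_list: list[str], threshold: float = 0.8) -> bool:
    """Return True if any word in the text fuzzily matches a banned keyword.

    Sketch only: uses difflib.SequenceMatcher similarity, which may not be
    the algorithm the actual ban_list validator uses.
    """
    for word in text.lower().split():
        for banned in ban_list:
            ratio = difflib.SequenceMatcher(None, word, banned.lower()).ratio()
            if ratio >= threshold:
                return True
    return False
```

With this sketch, a near-miss like "cryptoo" still trips a ban on "crypto" because its similarity ratio exceeds the threshold.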
guardrails-ai/endpoint_is_reachable
Guardrails AI: Endpoint is Reachable - Validates that a value is a reachable URL
guardrails-ai/ends_with
Guardrails AI: Ends with validator - Validates that a list or a string ends with a given value
guardrails-ai/one_line
Guardrails AI: One Line validator - Validates that a value is a single line, i.e. that the output contains no newline character
guardrails-ai/reading_time
Guardrails AI: Reading time validator - Validates that a string can be read in less than a certain amount of time.
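A reading-time check like this usually divides the word count by an assumed reading speed. A hedged sketch, assuming a typical ~200 words per minute (the real validator's assumption may differ):

```python
def reading_time_ok(text: str, max_minutes: float, wpm: int = 200) -> bool:
    """Check that the text can be read within max_minutes.

    Hypothetical sketch: 200 words/minute is an assumed average reading
    speed, not necessarily what the reading_time validator uses.
    """
    minutes = len(text.split()) / wpm
    return minutes <= max_minutes
```

A 100-word string passes a one-minute limit (0.5 minutes at 200 wpm), while a 500-word string does not.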
guardrails-ai/relevancy_evaluator
guardrails-ai/response_evaluator
A Guardrails AI validator that validates LLM responses by re-prompting the LLM to self-evaluate
guardrails-ai/saliency_check
Guardrails AI: Saliency check - Checks that the summary covers the list of topics present in the document
guardrails-ai/two_words
Guardrails AI: Two words validator - Validates that a value is two words
guardrails-ai/uppercase
Guardrails AI: Upper case - Validates that a value is upper case
guardrails-ai/valid_address
A Guardrails AI validator that checks whether a given address is valid
guardrails-ai/valid_json
Guardrails AI: Valid JSON - Validates that a value is parseable as valid JSON.
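The "parseable as valid JSON" check can be sketched in a few lines with the standard `json` module; the real validator wraps a check like this in the Guardrails validator interface.

```python
import json

def is_valid_json(value: str) -> bool:
    """Return True when the value parses as JSON; sketch of the check
    behind a valid_json-style validator."""
    try:
        json.loads(value)
        return True
    except json.JSONDecodeError:
        return False
```

For example, `'{"a": 1}'` passes while `'{a: 1}'` fails, since JSON requires quoted keys.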
guardrails-ai/valid_range
Guardrails AI: Valid range - Validates that a value is within a range
guardrails-ai/contains_string
guardrails-ai/continuous_integration_and_deployment_aws_template
guardrails-ai/gliner_pii
guardrails-ai/hub-types
Data structures used in the Guardrails Hub
guardrails-ai/internal_domains
guardrails-ai/llm_critic
A Guardrails AI validator that grades and evaluates LLM responses against a given set of criteria and metrics
guardrails-ai/quotes_price
Check if the generated text contains a price quote in the given currency
guardrails-ai/rag-llm-prompt-evaluator-guard
guardrails-ai/sky_validator