Content Safety Private Preview Documentation

📒 Overview

This documentation site is structured into the following sections.

  • Multimodal API Documentation covers the latest updates on performing content moderation on multimodal content.
  • Annotation API Documentation introduces a new capability to perform adapted annotation of harmful content according to specific guidelines.
  • Ungroundedness (Hallucination) Detection API Documentation describes how to detect ungrounded output generated by large language models (LLMs). Ungroundedness refers to instances where an LLM produces information that is non-factual or inconsistent with the source materials provided by the user (see the sketch after this list).
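
To give a feel for what an ungroundedness-detection call might look like, here is a minimal sketch in Python. The endpoint route, header name, and JSON field names below are illustrative assumptions, not the documented contract; refer to the Ungroundedness Detection API Documentation for the actual request and response schema.

```python
# Minimal sketch of an ungroundedness-detection request.
# The route, header, and JSON fields are illustrative assumptions.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
API_KEY = "<your-api-key>"  # placeholder

payload = {
    # Text produced by the LLM that we want to check.
    "text": "The report states revenue grew 40% in 2023.",
    # Source material the LLM output should be grounded in.
    "groundingSources": ["Revenue grew 4% in 2023, per the annual report."],
}

response = requests.post(
    f"{ENDPOINT}/contentsafety/text:detectUngroundedness",  # hypothetical route
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json())  # e.g. whether any ungrounded spans were detected
```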

💬 We're here to help!

If you get stuck, shoot us an email or use the feedback widget on the upper right of any page. We're excited you're here!