llm-observability
There are 14 repositories under the llm-observability topic.
langfuse/langfuse
🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with OpenTelemetry, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23
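A minimal sketch of the OpenAI SDK integration mentioned above, assuming the langfuse Python package and Langfuse credentials exported as environment variables:

```python
# Minimal sketch: Langfuse's drop-in wrapper around the OpenAI SDK.
# Assumes LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, LANGFUSE_HOST and
# OPENAI_API_KEY are set in the environment.
from langfuse.openai import openai  # drop-in replacement for `import openai`

# The call behaves like a normal OpenAI request; Langfuse records the
# prompt, completion, latency, and token usage as a trace.
completion = openai.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is LLM observability?"}],
)
print(completion.choices[0].message.content)
```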
comet-ml/opik
Debug, evaluate, and monitor your LLM applications, RAG systems, and agentic workflows with comprehensive tracing, automated evaluations, and production-ready dashboards.
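A hedged sketch of the tracing workflow, assuming the opik Python package and an already-configured workspace; the model call inside the function is a placeholder:

```python
# Minimal sketch using Opik's @track decorator; assumes the `opik` package
# is installed and configured (e.g. via `opik configure`).
from opik import track

@track
def answer(question: str) -> str:
    # ... call your LLM / RAG pipeline here (placeholder) ...
    return f"(model answer to: {question})"

answer("How do I trace an agentic workflow?")  # shows up as a trace in Opik
```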
Helicone/helicone
🧊 Open source LLM observability platform. One line of code to monitor, evaluate, and experiment. YC W23 🍓
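The "one line of code" refers to routing OpenAI traffic through Helicone's proxy; a hedged sketch, assuming an OpenAI client and a Helicone API key in the environment:

```python
# Minimal sketch: point the OpenAI client at Helicone's proxy so every
# request is logged. Assumes OPENAI_API_KEY and HELICONE_API_KEY are set.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url="https://oai.helicone.ai/v1",  # the "one line": route via Helicone
    default_headers={"Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}"},
)
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
)
```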
Agenta-AI/agenta
The open-source LLMOps platform: prompt playground, prompt management, LLM evaluation, and LLM observability all in one place.
lmnr-ai/lmnr
Laminar - open-source all-in-one platform for engineering AI products. Create a data flywheel for your AI app. Traces, Evals, Datasets, Labels. YC S24.
palico-ai/palico-ai
An integrated framework to build, improve the performance of, and productionize your LLM application.
langfuse/oss-llmops-stack
Modular, open source LLMOps stack that separates concerns: LiteLLM unifies LLM APIs, manages routing and cost controls, and ensures high availability, while Langfuse focuses on detailed observability, prompt versioning, and performance evaluations. A sketch of how the two layers connect follows below.
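A hedged sketch of that split, assuming the litellm and langfuse Python packages with Langfuse keys in the environment:

```python
# Minimal sketch: LiteLLM as the unified gateway, Langfuse as the trace sink.
# Assumes LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY (plus provider keys) are set.
import litellm

litellm.success_callback = ["langfuse"]   # ship successful calls to Langfuse
litellm.failure_callback = ["langfuse"]   # and failed ones, for error analysis

response = litellm.completion(
    model="gpt-4o-mini",                  # any provider LiteLLM can route to
    messages=[{"role": "user", "content": "Ping"}],
)
```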
radicalbit/radicalbit-ai-monitoring
A comprehensive solution for monitoring your AI models in production
myscale/myscale-telemetry
Open-source observability for your LLM application.
teilomillet/hapax
The reliability layer between your code and LLM providers.
langfuse/langfuse-java
🪢 Auto-generated Java Client for Langfuse API
AndrMoura/streamlit-chatbot-analytics
Streamlit-based chatbot leveraging Ollama via LangChain and PostHog-LLM for advanced logging and monitoring
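A minimal sketch of the Ollama-via-LangChain piece, assuming the langchain-ollama package and a locally running Ollama server; the PostHog-LLM logging hook is omitted:

```python
# Minimal sketch: chat with a local Ollama model through LangChain.
# Assumes an Ollama server on localhost and the langchain-ollama package;
# the PostHog-LLM logging from the repo above is not shown here.
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3")  # hypothetical model name; any pulled model works
reply = llm.invoke("Summarize what LLM observability means in one sentence.")
print(reply.content)
```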
modelmetry/modelmetry-sdk-js
The Modelmetry JS/TS SDK allows developers to easily integrate Modelmetry’s advanced guardrails and monitoring capabilities into their LLM-powered applications.
Artificia11nte11igence/Catalyst
Python SDK for agent AI observability, monitoring, and evaluation. Includes AI agent, LLM, and tool tracing, debugging of multi-agent systems, self-hosted dashboards, and advanced analytics with timeline and execution-graph views.