/observability-prompots

LLM observability related prompts


GitHub Repository: Observability LLM Prompts

Overview

Welcome to the Observability LLM Prompts GitHub repository! This repository contains a curated collection of LLM (large language model) prompts specifically designed for the observability domain. These prompts aim to facilitate efficient and effective interactions with AI-powered tools, such as OpenAI's GPT models, to assist users in understanding, monitoring, and troubleshooting complex systems.

The prompts in this repository cover various aspects of observability, including logs, metrics, traces, and alerts, as well as related concepts such as distributed systems, infrastructure monitoring, and application performance management.

Features

  • Curated collection of LLM prompts for the observability domain
  • Covers a wide range of topics, including logs, metrics, traces, alerts, and more
  • Regularly updated with new prompts and improvements to existing ones
  • Open to community contributions and suggestions
  • Includes examples and use cases for each prompt

Contents

The repository is organized into the following sections:

  1. Logs: Prompts related to log analysis, log aggregation, and log management.
  2. Metrics: Prompts focused on system and application performance metrics, including collection, visualization, and interpretation.
  3. Traces: Prompts dealing with distributed tracing, trace analysis, and trace visualization.
  4. Alerts: Prompts about alerting systems, rules, and best practices for managing alerts.
  5. Infrastructure Monitoring: Prompts addressing various aspects of monitoring infrastructure components, such as servers, containers, and networks.
  6. Application Performance Management: Prompts related to monitoring and optimizing the performance of applications, including identifying bottlenecks and improving user experience.
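To illustrate the kind of prompt each section contains, here is a hypothetical log-analysis prompt template in the spirit of the Logs section. The wording and function names are illustrative assumptions, not taken from the repository:

```python
# Hypothetical log-analysis prompt template (illustrative, not from the repo).
LOG_ANALYSIS_PROMPT = (
    "You are an observability assistant. Analyze the following log lines, "
    "identify errors or anomalies, and suggest likely root causes.\n\n"
    "Logs:\n{logs}"
)

def build_log_prompt(logs: list[str]) -> str:
    """Fill the template with raw log lines joined by newlines."""
    return LOG_ANALYSIS_PROMPT.format(logs="\n".join(logs))

sample = build_log_prompt([
    "2024-05-01T12:00:01Z ERROR payment-service timeout calling db",
    "2024-05-01T12:00:02Z WARN payment-service retrying (attempt 2)",
])
print(sample)
```

The real prompts in each section follow the same pattern: a fixed instruction framing the assistant's role, plus a slot for the user's own telemetry data.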

LLM High Level Architecture Diagram

[Diagram: High Level LLM Conversations Architecture]

Usage

To use the prompts in this repository, select the relevant prompt from the appropriate section and supply it to your LLM-powered tool, along with any context (logs, metrics, traces) you want analyzed. The AI model will generate a response or provide assistance based on the prompt and the context you supply.
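With a chat-style LLM API, a common pattern is to send the repository prompt as the system message and your own telemetry as the user message. The sketch below shows this pattern; the prompt text and model name are illustrative assumptions, not contents of this repository:

```python
# Sketch: combining a repository prompt with user context for a chat LLM.
def build_messages(prompt: str, context: str) -> list[dict]:
    """Pair a repository prompt (system role) with user-supplied
    context such as logs or alert payloads (user role)."""
    return [
        {"role": "system", "content": prompt},
        {"role": "user", "content": context},
    ]

messages = build_messages(
    prompt="You are an observability assistant specializing in alert triage.",
    context="Alert: CPU usage above 90% on host web-01 for 15 minutes.",
)
# The messages can then be sent to a model, e.g. with the official
# OpenAI Python client (reads OPENAI_API_KEY from the environment):
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(model="gpt-4o", messages=messages)
```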

Running the Web Server

Add your own .env file containing the following fields:

OPENAI_API_KEY=sk-*****
API_TOKEN=hf_***

Then start the endpoint with: uvicorn main:app --host 0.0.0.0 --port 8000 --reload

Contributing

We encourage community contributions to help improve and expand the collection of prompts in this repository. If you have an idea for a new prompt or an improvement to an existing one, please submit a pull request or open an issue to discuss your suggestions.

License

This repository is licensed under the Apache 2.0 License. Please see the LICENSE file for more information.