Webhook LLM Service

An asynchronous REST API service built with Litestar that processes incoming webhook requests with an LLM (Large Language Model) and sends the generated responses to specified callback endpoints. It tracks conversation history and rate-limits LLM calls. Not intended for production use.

Features

  • Asynchronous webhook processing with Redis message broker
  • OpenAI GPT integration for text generation
  • Conversation history tracking with PostgreSQL
  • Rate limiting for LLM API calls
  • Docker containerization
  • OpenAPI documentation
  • Comprehensive test coverage

Tech Stack

  • Python 3.12
  • Litestar (FastAPI-like web framework)
  • PostgreSQL (conversation storage)
  • Redis (message broker)
  • OpenAI API (LLM provider)
  • Docker & Docker Compose
  • Poetry (dependency management)
  • Alembic (database migrations)
  • Pytest (testing)

Prerequisites

  • Docker and Docker Compose
  • Python 3.12+
  • Poetry
  • OpenAI API key

Installation

  1. Clone the repository:
git clone https://github.com/ACK1D/webhook-llm.git
cd webhook-llm
  2. Install dependencies:
poetry install
  3. Create a .env file with the following variables:
OPENAI_API_KEY=<your_openai_api_key>
DATABASE_URL=<your_database_url>
REDIS_URL=<your_redis_url>
RPM_LIMIT=<your_rpm_limit>

or use the .env.example file as a template and fill in the missing values.

cp .env.example .env
  4. Start the services:
docker compose up -d

API Endpoints

Health Check

GET /

Response:

{"status": "ok", "service": "LLM Webhook Service", "version": "0.1.0"}

Process Webhook

POST /webhook

Request:

{
    "message": "Hello, how are you?",
    "callback_url": "https://example.com/callback"
}

Response:

{
    "status": "accepted",
    "message": "Request placed in queue",
    "conversation_id": "904d270d-69ba-4057-b712-b349d4ff0d5a"
}
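
For example, the endpoint can be exercised with httpx (assuming the service is running locally on port 8000; adjust the URL for your deployment):

import httpx

payload = {
    "message": "Hello, how are you?",
    "callback_url": "https://example.com/callback",
}
response = httpx.post("http://localhost:8000/webhook", json=payload)
response.raise_for_status()
print(response.json())  # {"status": "accepted", ..., "conversation_id": "..."}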

API Documentation

Automatically generated OpenAPI documentation is available at:

  • GET /schema (ReDoc)
  • GET /schema/swagger (Swagger)
  • GET /schema/elements (Elements)
  • GET /schema/rapidoc (RapiDoc)

Architecture

The service consists of three main components (a simplified worker-loop sketch follows the list):

  1. API Service: Handles incoming webhook requests and queues them in Redis
  2. Worker Service: Processes queued messages, calls OpenAI API, and sends responses
  3. Database: Stores conversation history and messages
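
To make the flow concrete, here is a heavily simplified sketch of the worker loop. The queue name, payload shape, and callback format are assumptions for illustration; the real app/worker.py also handles conversation history, rate limiting, and errors.

import asyncio
import json

import httpx
from openai import AsyncOpenAI
from redis.asyncio import Redis

QUEUE = "webhook_tasks"  # hypothetical queue name


async def worker_loop(redis: Redis, openai_client: AsyncOpenAI) -> None:
    while True:
        # Block until the API service pushes a queued webhook task.
        _, raw = await redis.blpop(QUEUE)
        task = json.loads(raw)

        # Generate a reply (conversation-history lookup omitted here).
        completion = await openai_client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": task["message"]}],
        )
        reply = completion.choices[0].message.content

        # Deliver the generated response to the caller's endpoint.
        async with httpx.AsyncClient() as client:
            await client.post(task["callback_url"], json={"response": reply})


if __name__ == "__main__":
    asyncio.run(worker_loop(Redis.from_url("redis://localhost:6379"), AsyncOpenAI()))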

Development

  1. Run tests (an illustrative test sketch follows this list):
poetry run pytest
  2. Run the API locally:
poetry run python -m app.main
  3. Run the worker locally:
poetry run python -m app.worker
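
As an illustration, a health-check test with Litestar's built-in test client could look like this (the app import path is an assumption based on the project layout):

from litestar.testing import TestClient

from app.main import app


def test_health_check() -> None:
    with TestClient(app=app) as client:
        response = client.get("/")
        assert response.status_code == 200
        assert response.json()["status"] == "ok"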

Configuration

The service is configured using environment variables (a minimal loading sketch follows the list):

  • OPENAI_API_KEY: OpenAI API key
  • OPENAI_MODEL: Model to use (default: gpt-3.5-turbo)
  • RPM_LIMIT: Rate limit for OpenAI API calls, in requests per minute
  • REDIS_URL: Redis connection URL
  • DATABASE_URL: PostgreSQL connection URL
  • DEBUG: Enable debug mode
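
A minimal sketch of loading these variables is shown below; the real app/config.py may use a different mechanism (e.g. pydantic-settings), and the defaults here are assumptions.

import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    openai_api_key: str
    openai_model: str
    rpm_limit: int
    redis_url: str
    database_url: str
    debug: bool


def load_settings() -> Settings:
    # Required variables raise KeyError when missing; optional ones fall
    # back to assumed defaults.
    return Settings(
        openai_api_key=os.environ["OPENAI_API_KEY"],
        openai_model=os.getenv("OPENAI_MODEL", "gpt-3.5-turbo"),
        rpm_limit=int(os.getenv("RPM_LIMIT", "60")),
        redis_url=os.environ["REDIS_URL"],
        database_url=os.environ["DATABASE_URL"],
        debug=os.getenv("DEBUG", "false").lower() == "true",
    )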

Project Structure

  • app/: Application code
  • tests/: Test code
  • docker/: Dockerfiles
  • compose.yml: Docker Compose file
  • alembic.ini: Alembic configuration
  • app/migrations/: Database migrations
  • app/config.py: Configuration settings
  • app/models/: Database models
  • app/services/: Service code
  • app/main.py: Main application entry point
  • app/worker.py: Worker entry point