/long_term_memory_with_qdrant

RAG implementation for Ooba characters. Dynamically spins up a new Qdrant vector database and manages retrieval and commits for conversations, keyed entirely on character name. Gives characters access to past chat conversations.


Chatbot Memory Extension

Purpose

Enhance bot engagement by enabling memory retention across conversations.

Functionality

Uses a Qdrant vector database, run in a Docker container, to store and retrieve previous user interactions. Workflow:

  1. Vector Generation: Converts new user input into a vector embedding.
  2. Vector Storage: Stores the vector in a bot-named collection in the vector database.
  3. Collection Creation: Creates the bot-named collection if it does not already exist.
  4. Memory Retrieval: Retrieves related past comments by cosine similarity between their embeddings and the most recent comment's embedding.
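The four steps above can be sketched in plain Python. This is an illustration of the control flow only, not the extension's actual code: the hash-based `embed()` is a toy stand-in for a real embedding model, and a per-bot dict stands in for a bot-named Qdrant collection.

```python
import hashlib
import math

def embed(text: str, dim: int = 8) -> list[float]:
    # Toy stand-in for a real embedding model: derive a deterministic
    # vector from a hash of the text (illustration only).
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity, as used in step 4 for memory retrieval.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# One list per bot name stands in for a bot-named Qdrant collection.
collections: dict[str, list[tuple[str, list[float]]]] = {}

def commit(bot: str, comment: str) -> None:
    # Steps 1-3: embed the comment, create the bot's collection if it
    # does not exist yet, and store the (text, vector) pair.
    collections.setdefault(bot, []).append((comment, embed(comment)))

def retrieve(bot: str, comment: str, limit: int = 3) -> list[str]:
    # Step 4: rank stored comments by cosine similarity to the new one.
    query = embed(comment)
    ranked = sorted(collections.get(bot, []),
                    key=lambda item: cosine(query, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:limit]]

commit("bot1", "I like hiking in the mountains.")
commit("bot1", "My favourite food is ramen.")
memories = retrieve("bot1", "What do I like to eat?", limit=1)
```

Because each bot writes to and reads from its own collection, `retrieve("bot2", ...)` returns nothing until bot2 has committed memories of its own.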

Installation

  • Create the shared Docker network: docker network create shared_network
  • Place the provided files:
    • docker-compose.yml: launches the Ooba server and the Qdrant database.
    • .env: specifies the Docker data persistence locations.
  • Optionally, adjust the memory retrieval count via the panel slider (max 10).
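The repo ships its own docker-compose.yml; purely as a sketch of the layout described above, the Qdrant side of such a file might look like the following (the service name, volume variable, and paths are assumptions, not the repo's actual contents):

```yaml
services:
  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"                      # Qdrant's default HTTP API port
    volumes:
      - ${QDRANT_DATA}:/qdrant/storage   # persistence location set in .env
    networks:
      - shared_network

networks:
  shared_network:
    external: true                       # created above with `docker network create`
```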

Initiate with:

docker-compose up

Build the Ooba Docker image if needed:

docker-compose up --build

Usage

Each bot maintains its own memory. To start a new collection, duplicate a bot's settings under a distinct name (e.g., bot2).

Future

Exploring event-driven coding and Gradio for potential enhancements, such as a dropdown for sharing memory between bots and raw-result display in the panel. Vector database configuration could also be added to the GUI. Currently, raw results can be viewed by sending the conversation to the notebook.