A template for a RAG agent with semantic search capabilities over an S3 bucket

Eidolon S3 RAG Recipe

In this recipe, we have created a RAG chatbot powered by documents living in S3.

The documents are parsed and embedded on the fly, which is valuable if you have a small body of data that changes frequently. To process a large body of data, you will want to set up an ingestion pipeline instead.
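Because embeddings are refreshed on the fly, adding or updating a document in the bucket is enough for it to appear in subsequent answers. A minimal sketch using the AWS CLI, assuming a bucket named my-rag-docs (the real bucket name is whatever you configure for this machine) and example file names:

# Upload or overwrite a document; it is parsed and embedded on the next query.
aws s3 cp ./handbook.pdf s3://my-rag-docs/handbook.pdf

# Remove a document; it drops out of search results once re-embedding runs.
aws s3 rm s3://my-rag-docs/old-notes.md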

Core Concepts

  • Multi-agent communication
  • Sub-component customization
  • Dynamic embedding management

Agents

Conversational Agent

The user-facing copilot. Ask this agent questions and it will use the LLM to provide answers, reaching out to the S3 Search Agent as needed for relevant documents.

S3 Search Agent

Handles loading, embedding, and re-embedding documents, ensuring they are up to date.

Translates user queries into vector search queries and returns the top results.

Directory Structure

  • resources: This directory contains additional resources for the project. An example agent is provided for reference.
  • components: This directory is where any custom code should be placed.

Running the Server

First you need to clone the project and navigate to the project directory:

git clone https://github.com/eidolon-ai/eidolon-s3-rag.git
cd eidolon-s3-rag

Then run the server using Docker with the following command:

make docker-serve

The first time you run this command, you may be prompted to enter credentials that the machine needs to run (e.g., an OpenAI API key).
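If you would rather supply credentials up front, you can export them in your shell before starting the server. This is a sketch that assumes the machine reads the standard OPENAI_API_KEY environment variable:

# Placeholder value; substitute your own key.
export OPENAI_API_KEY=sk-your-key
make docker-serve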

This command will download the dependencies required to run your agent machine and start the Eidolon HTTP server in "dev-mode".

If the server starts successfully, you should see the following output:

Starting Server...
INFO:     Started server process [34623]
INFO:     Waiting for application startup.
INFO - Building machine 'local_dev'
...
INFO - Server Started in 1.50s
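To sanity-check the server, you can hit it on localhost. This assumes the default dev-mode port of 8080; adjust if your machine is configured differently:

# The server is FastAPI-based, so interactive API docs are typically served at /docs.
curl -i http://localhost:8080/docs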

Running the Client

Ensure you have the Eidolon CLI installed:

python3 -m venv .venv
. .venv/bin/activate
pip3 install 'eidolon-ai-client[cli]' -U
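To confirm the CLI landed on your PATH, run a quick smoke test (assuming standard --help behavior):

# Lists the available subcommands (processes, actions, ...) if the install succeeded.
eidolon-cli --help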

Create a process for the ragtime agent to talk to:

export PID=$(eidolon-cli processes create --agent ragtime); echo $PID

Send it prompts:

eidolon-cli actions converse --process-id $PID --body "Summarize the topic of every document you are aware of."
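Since the copilot is conversational, you can keep sending follow-up prompts to the same process ($PID); the prompt text below is just an example:

eidolon-cli actions converse --process-id $PID --body "Which of those documents covers pricing?"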