The LLM-RAG (Large Language Model with Retrieval-Augmented Generation) architecture enhances conversational AI through advanced memory management and data retrieval. It integrates three components for efficient, context-aware interactions: PersistentStore, HazyMemory, and a Conversational Memory Buffer.
- **PersistentStore**: Stores entity relationships and states in an XML format for graph-like compression and flexible data representation, optimizing compression and querying for real-time interactions.
- **HazyMemory**: A vector database providing broader context from past interactions, improving response relevance and coherence.
- **Conversational Memory Buffer**: Maintains the 50 most recent dialogues for continuous conversation flow.
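The two memory components above can be illustrated with a minimal sketch. The class names, the toy `embed` function, and the cosine-similarity ranking are assumptions for illustration; the actual implementations in this project may differ.

```python
import math
from collections import deque


class ConversationBuffer:
    """Keeps only the N most recent dialogue turns (sketch of the Conversational Memory Buffer)."""

    def __init__(self, max_turns=50):
        # deque with maxlen silently discards the oldest turn when full
        self.turns = deque(maxlen=max_turns)

    def add(self, role, text):
        self.turns.append((role, text))

    def context(self):
        return list(self.turns)


def cosine_similarity(a, b):
    """Score used to rank stored interactions by relevance to a query."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


class HazyMemorySketch:
    """Toy vector store; embed() stands in for a real embedding model."""

    def __init__(self, embed):
        self.embed = embed
        self.entries = []  # (vector, text) pairs

    def add(self, text):
        self.entries.append((self.embed(text), text))

    def retrieve(self, query, k=3):
        # Rank all stored texts by similarity to the query vector, return top k
        qv = self.embed(query)
        ranked = sorted(self.entries,
                        key=lambda e: cosine_similarity(qv, e[0]),
                        reverse=True)
        return [text for _, text in ranked[:k]]
```

In this sketch, retrieved HazyMemory entries and the recent buffer turns would both be prepended to the prompt before each model call.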
- Prepare API Key: Place your OpenAI API key in `secrets.txt` within the project's root directory.
- Install Dependencies: Run `pip install -r requirements.txt`.
- Start the Bot: Execute `python3 main.py`, or `python main.py` if Python 3 is not explicitly required.
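How `main.py` actually consumes the key is not shown here; the sketch below assumes `secrets.txt` contains nothing but the key itself, and the helper name is hypothetical.

```python
from pathlib import Path


def load_api_key(path="secrets.txt"):
    """Read the OpenAI API key from the project root.

    Sketch only: the real loading logic in main.py may differ.
    """
    key = Path(path).read_text(encoding="utf-8").strip()
    if not key:
        raise ValueError(f"No API key found in {path}")
    return key
```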
Designed for gaming and simulation, this architecture allows for dynamic entity interaction and state management, enabling immersive experiences.
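A hypothetical example of how entity relationships and states might look in the XML format described above, with a small graph-like query over it. The schema, element names, and entity IDs are invented for illustration; the real PersistentStore format is not documented in this section.

```python
import xml.etree.ElementTree as ET

# Hypothetical entity-state document; the actual PersistentStore schema may differ.
ENTITY_XML = """
<entities>
  <entity id="npc_blacksmith" state="working">
    <relation target="player" type="friendly"/>
    <relation target="npc_guard" type="neutral"/>
  </entity>
  <entity id="player" state="exploring"/>
</entities>
"""


def relations_of(xml_text, entity_id):
    """Return (target, type) pairs for one entity -- a graph-like query over the XML."""
    root = ET.fromstring(xml_text)
    for entity in root.iter("entity"):
        if entity.get("id") == entity_id:
            return [(r.get("target"), r.get("type"))
                    for r in entity.findall("relation")]
    return []
```

Representing relations as child elements keeps the store compact while still letting the bot answer graph-style questions ("who is friendly toward the player?") with a simple traversal.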
The LLM-RAG architecture combines layered memory management and flexible data storage to deliver responsive, context-aware conversational AI.