A semi-opinionated RAG framework.
R2R was conceived to bridge the gap between experimental RAG models and robust, production-ready systems. Our semi-opinionated framework cuts through the complexity, offering a straightforward path to deploy, adapt, and maintain RAG pipelines in production. We prioritize simplicity and practicality, aiming to set a new industry benchmark for ease of use and effectiveness.

Install R2R directly using pip:
```bash
# use `'r2r[all]'` to download all required deps
pip install 'r2r[parsing]'

# setup env
export OPENAI_API_KEY=sk-...
```
The project includes several basic examples that demonstrate application deployment and interaction:
- `app.py`: This example runs the main application, which includes the ingestion, embedding, and RAG pipelines served via FastAPI.

  ```bash
  uvicorn r2r.examples.basic.app:app
  ```
- `run_client.py`: This example should be run after starting the main application. It demonstrates uploading text entries as well as a PDF to the local server with the Python client, and shows document- and user-level vector management with built-in features.

  ```bash
  python -m r2r.examples.basic.run_client
  ```
- `run_pdf_chat.py`: An example demonstrating upload and chat with a more realistic PDF.

  ```bash
  # Ingest pdf
  python -m r2r.examples.pdf_chat.run_demo ingest

  # Ask a question
  python -m r2r.examples.pdf_chat.run_demo search "What are the key themes of Meditations?"
  ```
- `web`: A web application that accompanies the framework to provide visual intelligence.

  ```bash
  cd $workdir/web && pnpm install

  # Serve the web app
  pnpm dev
  ```
Follow these steps to ensure a smooth setup:
- Install Poetry:
  - Before installing the project, make sure you have Poetry on your system. If not, visit the official Poetry website for installation instructions.
- Clone and Install Dependencies:
  - Clone the project repository and navigate to the project directory:

    ```bash
    git clone git@github.com:SciPhi-AI/r2r.git
    cd r2r
    ```

  - Copy the `.env.example` file to `.env`. This file is in the main project folder:

    ```bash
    cp .env.example .env

    # Add secrets, `OPENAI_API_KEY` at a minimum
    vim .env
    ```

  - Install the project dependencies with Poetry:

    ```bash
    # See pyproject.toml for available extras
    # use "all" to include every optional dependency
    poetry install --extras "parsing"
    ```

  - Execute with `poetry run`:

    ```bash
    poetry run python -m r2r.examples.pdf_chat.run_demo ingest
    ```
- Configure Environment Variables:
  - You need to set up cloud provider secrets in your `.env`. At a minimum, you will need an OpenAI key.
  - The framework currently supports PostgreSQL (locally), pgvector, and Qdrant, with plans to extend coverage.
- 🚀 Rapid Deployment: Facilitates a smooth setup and development of production-ready RAG systems.
- ⚖️ Flexible Standardization: `Ingestion`, `Embedding`, and `RAG` with proper `Observability`.
- 🧩 Easy to Modify: Provides a structure that can be extended to deploy your own custom pipelines.
- 📦 Versioning: Ensures your work remains reproducible and traceable through version control.
- 🔌 Extensibility: Enables a quick and robust integration with various VectorDBs, LLMs and Embeddings Models.
- 🤖 OSS Driven: Built for and by the OSS community to help startups and enterprises quickly build with RAG.
- 📝 Deployment Support: Available to help you build and deploy your RAG systems end-to-end.
The framework primarily revolves around three core abstractions:
- The Ingestion Pipeline: Facilitates the preparation of embeddable 'Documents' from various data formats (json, txt, pdf, html, etc.). The abstraction can be found in `ingestion.py`.
- The Embedding Pipeline: Manages the transformation of text into stored vector embeddings, interacting with embedding and vector database providers through a series of steps (e.g., extract_text, transform_text, chunk_text, embed_chunks, etc.). The abstraction can be found in `embedding.py`.
- The RAG Pipeline: Works similarly to the embedding pipeline but incorporates an LLM provider to produce text completions. The abstraction can be found in `rag.py`.
Each pipeline incorporates a logging database for operation tracking and observability.
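To make the three-pipeline shape concrete, here is a self-contained toy sketch of the ingest → embed → RAG flow with a shared log. It assumes nothing about r2r's actual classes or signatures: every name, the bag-of-vowels "embedding", and the stub LLM are illustrative placeholders only.

```python
# Toy sketch of the three core abstractions. NOT r2r's API: all names
# and implementations here are placeholders for illustration.
from dataclasses import dataclass


@dataclass
class Document:
    id: str
    text: str


log: list = []  # stand-in for the logging database each pipeline writes to


def ingest(raw: str, doc_id: str) -> Document:
    """Ingestion pipeline: normalize raw input into an embeddable Document."""
    log.append(("ingest", doc_id))
    return Document(id=doc_id, text=raw.strip())


def chunk_text(text: str, size: int = 40) -> list:
    """Embedding step: split text into fixed-size chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def embed_chunks(chunks: list) -> list:
    """Embedding step: toy vowel-frequency 'embedding' (for illustration only)."""
    return [[chunk.count(c) / len(chunk) for c in "aeiou"] for chunk in chunks]


def embed(doc: Document, store: dict) -> None:
    """Embedding pipeline: chunk, embed, and store vectors."""
    chunks = chunk_text(doc.text)
    for chunk, vec in zip(chunks, embed_chunks(chunks)):
        store[chunk] = vec
    log.append(("embed", doc.id, len(chunks)))


def rag(query: str, store: dict, llm=lambda p: f"stub answer to: {p}") -> str:
    """RAG pipeline: retrieve the nearest chunk, then call a (stubbed) LLM."""
    qvec = embed_chunks([query])[0]
    best = max(store, key=lambda c: sum(a * b for a, b in zip(store[c], qvec)))
    log.append(("rag", query))
    return llm(f"context: {best}\nquestion: {query}")


store: dict = {}
doc = ingest("Meditations is a series of personal writings by Marcus Aurelius.", "doc-1")
embed(doc, store)
answer = rag("Who wrote Meditations?", store)
```

Swapping the stub pieces for real providers (an embedding model, a vector DB client, an LLM) is exactly the extension point the framework's pipeline abstractions are meant to expose.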