The easiest way to use Agentic RAG in any enterprise.
As simple to configure as OpenAI's custom GPTs, but deployable in your own cloud infrastructure using Docker. Built using LlamaIndex.
Get Started · Endpoints · Deployment · Contact
To run, start a docker container with our image:
docker run -p 8000:8000 ragapp/ragapp
Then, access the Admin UI at http://localhost:8000/admin to configure your RAGapp.
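If you want RAGapp to keep running in the background and come back up after a reboot, the standard Docker flags work; the container name ragapp below is just an example:
docker run -d --name ragapp --restart unless-stopped -p 8000:8000 ragapp/ragapp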
You can use hosted AI models from OpenAI or Gemini, and local models using Ollama.
Note: To avoid running into any errors, we recommend using the latest version of Docker and (if needed) Docker Compose.
The docker container exposes the following endpoints:
- Admin UI: http://localhost:8000/admin
- Chat UI: http://localhost:8000
- API: http://localhost:8000/docs
Note: The Chat UI and API are only functional if the RAGapp is configured.
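If you prefer to explore the API from the command line instead of the interactive docs, the raw OpenAPI spec is typically served at /openapi.json (the FastAPI default; treat the exact path as an assumption) and can be inspected with curl and jq:
# List the API routes exposed by RAGapp (requires jq)
curl -s http://localhost:8000/openapi.json | jq '.paths | keys'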
RAGapp doesn't come with any authentication layer by design. You'll have to protect the /admin and /api/management paths in your cloud environment to secure your RAGapp.
This step heavily depends on your cloud provider and the services you use.
A common way to do so using Kubernetes is to use an Ingress Controller.
Later versions of RAGapp will support restricting access based on access tokens forwarded from an API gateway or similar.
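As a minimal sketch of what this can look like with ingress-nginx on Kubernetes, you could put basic auth in front of the sensitive paths; the host name, service name, and secret below are placeholders, not something RAGapp ships with:
# Create an htpasswd file and store it as a Kubernetes secret (the key must be named "auth")
htpasswd -c auth admin
kubectl create secret generic ragapp-basic-auth --from-file=auth
# Route /admin and /api/management through basic auth using ingress-nginx annotations
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ragapp-protected
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: ragapp-basic-auth
spec:
  rules:
    - host: ragapp.example.com
      http:
        paths:
          - path: /admin
            pathType: Prefix
            backend:
              service:
                name: ragapp   # placeholder service name
                port:
                  number: 8000
          - path: /api/management
            pathType: Prefix
            backend:
              service:
                name: ragapp
                port:
                  number: 8000
EOF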
We provide a docker-compose.yml file to make it easy to deploy RAGapp with Ollama and Qdrant in your own infrastructure.
Using the MODEL environment variable, you can specify which model to use, e.g. llama3:
MODEL=llama3 docker-compose up
If you don't specify the MODEL variable, the default model used is phi3, which is less capable than llama3 but faster to download.
Note: The setup container in the docker-compose.yml file will download the selected model into the ollama folder; this will take a few minutes.
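Once the stack is up, you can check that the model was downloaded by asking the Ollama service started by Docker Compose to list its models (the service name ollama matches the provided docker-compose.yml):
docker-compose exec ollama ollama list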
Using the OLLAMA_BASE_URL environment variable, you can specify which Ollama host to use.
If you don't specify the OLLAMA_BASE_URL variable, the default points to the Ollama instance started by Docker Compose (http://ollama:11434).
If you're running a local Ollama instance, you can choose to connect it to RAGapp by setting the OLLAMA_BASE_URL variable to http://host.docker.internal:11434:
MODEL=llama3 OLLAMA_BASE_URL=http://host.docker.internal:11434 docker-compose up
Note: host.docker.internal is not available on Linux machines; you'll have to use 172.17.0.1 instead. For details, see Issue #78.
Using a local Ollama instance is necessary if you're running RAGapp on macOS, as Docker for Mac does not support GPU acceleration.
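Since the setup container only populates the Ollama instance started by Docker Compose, a locally running Ollama needs the model pulled separately; llama3 is just the example model from above:
# Pull the model into the local Ollama instance (e.g. on macOS)
ollama pull llama3
# Optionally verify that the local instance serves it
curl http://localhost:11434/api/tags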
To enable Docker access to NVIDIA GPUs on Linux, install the NVIDIA Container Toolkit.
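After installing the toolkit, a common way to verify that containers can see the GPU is the sample workload from NVIDIA's documentation:
# Should print the nvidia-smi table if GPU passthrough works
docker run --rm --gpus all ubuntu nvidia-smi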
It's easy to deploy RAGapp in your own cloud infrastructure. Customized K8S deployment descriptors are coming soon.
To set up a development environment, install the dependencies and start RAGapp in development mode:
poetry install --no-root
make build-frontends
make dev
Note: To check out the admin UI during development, please go to http://localhost:3000/admin.
Questions, feature requests or found a bug? Open an issue or reach out to marcusschiesser.