A bot that accepts PDF documents and lets you ask questions about them.
The LLMs are downloaded and served via Ollama.
- Docker (with docker-compose)
- Python (for development only)
Create a docker-compose.yml file with the following contents:
```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - 11434:11434
    volumes:
      - ~/ollama:/root/.ollama
    networks:
      - net
  app:
    image: amithkoujalgi/pdf-bot:1.0.0
    ports:
      - 8501:8501
    environment:
      - OLLAMA_API_BASE_URL=http://ollama:11434
      - MODEL=orca-mini
    networks:
      - net
networks:
  net:
```
Then run:

```shell
docker-compose up
```
When the server is up and running, access the app at: http://localhost:8501
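To confirm that the Ollama server is reachable and has finished pulling the model, you can query its `/api/tags` endpoint, which lists the models available locally. A minimal sketch using only the Python standard library (the endpoint and port match the compose file above):

```python
import json
import urllib.request

# Default Ollama endpoint, as mapped in docker-compose.yml above.
OLLAMA_API_BASE_URL = "http://localhost:11434"


def model_names(payload: dict) -> list[str]:
    """Extract model names from an Ollama /api/tags response payload."""
    return [m["name"] for m in payload.get("models", [])]


def list_models(base_url: str = OLLAMA_API_BASE_URL) -> list[str]:
    """Ask the Ollama server which models it has pulled."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return model_names(json.load(resp))


if __name__ == "__main__":
    # Prints something like ['orca-mini:latest'] once the download finishes.
    print(list_models())
```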
Note:
- The first startup takes a while, since the specified model has to be downloaded.
- If your hardware has no GPU and you run on CPU only, expect slow responses from the bot.
- Only Nvidia GPUs are supported, as mentioned in Ollama's documentation; others such as AMD are not supported yet. Read how to use a GPU with the Ollama container and docker-compose.
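As a sketch of the GPU setup referenced above (this assumes the NVIDIA Container Toolkit is installed on the host, per Ollama's docs), the `ollama` service can be granted GPU access with a `deploy` block:

```yaml
services:
  ollama:
    image: ollama/ollama
    # Reserve all available Nvidia GPUs for this container.
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```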
Image on DockerHub: https://hub.docker.com/r/amithkoujalgi/pdf-bot
Demo:
PDF.Bot.Demo.mp4
Sample PDFs:
- Expose model params such as `temperature`, `top_k`, `top_p` as configurable env vars
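As a sketch of what that roadmap item could look like (`TEMPERATURE`, `TOP_K`, and `TOP_P` are assumed variable names, not env vars the app currently reads):

```python
import os


def load_model_params() -> dict:
    """Read hypothetical model-parameter env vars, with fallback defaults.

    The variable names and defaults here are illustrative assumptions,
    not part of the current pdf-bot configuration.
    """
    return {
        "temperature": float(os.environ.get("TEMPERATURE", "0.7")),
        "top_k": int(os.environ.get("TOP_K", "40")),
        "top_p": float(os.environ.get("TOP_P", "0.9")),
    }
```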
Thanks to the incredible Ollama, Langchain and Streamlit projects.