.
├── Dockerfile
├── README.md
├── docker-compose.yml
├── main.py
├── requirements.txt
├── secrets
│   ├── google_api_key.secret
│   ├── qdrant_api_key.secret
│   └── qdrant_url.secret
├── src
│   ├── __init__.py
│   ├── api
│   │   ├── __init__.py
│   │   └── endpoints
│   │       ├── __init__.py
│   │       └── question_answer.py
│   ├── config
│   │   ├── __init__.py
│   │   └── settings.py
│   ├── data
│   │   ├── __init__.py
│   │   └── web_scraper.py
│   ├── frontend
│   │   ├── __init__.py
│   │   ├── logo_horizontal.png
│   │   └── streamlit.py
│   ├── models
│   │   ├── __init__.py
│   │   └── embeddings
│   │       └── embedding_model.py
│   └── services
│       ├── __init__.py
│       ├── prompt_service.py
│       └── vector_store_service.py
└── tree.txt
10 directories, 25 files
This project is a Question-Answering Assistant about Gigalogy, built with FastAPI and Streamlit. It uses Google Generative AI (the Gemini API) for answer generation and Qdrant for document retrieval, providing accurate and relevant answers to user queries.
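The retrieve-then-generate flow can be sketched as follows. This is an illustrative toy, not the project's code: a word-overlap score stands in for Gemini embeddings and an in-memory dict stands in for a Qdrant search, so the example is self-contained and runnable.

```python
def score(question: str, doc: str) -> int:
    # Stand-in for cosine similarity between embedding vectors.
    return len(set(question.lower().split()) & set(doc.lower().split()))

def retrieve(question: str, docs: dict[str, str], k: int = 2) -> list[str]:
    # Stand-in for a Qdrant vector search; returns the k best source URLs.
    ranked = sorted(docs, key=lambda url: score(question, docs[url]), reverse=True)
    return ranked[:k]

def answer(question: str, docs: dict[str, str]) -> dict:
    # Stand-in for prompting Gemini with the retrieved context.
    sources = retrieve(question, docs)
    context = " ".join(docs[url] for url in sources)
    return {"answer": f"Based on: {context}", "sources": sources}

docs = {
    "https://example.com/a": "Gigalogy builds personalization AI",
    "https://example.com/b": "Qdrant is a vector database",
}
print(answer("what is qdrant", docs)["sources"][0])  # → https://example.com/b
```

In the real service, `retrieve` corresponds to `vector_store_service.py` querying Qdrant and `answer` to `prompt_service.py` calling the Gemini API.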
- `/ask`
  - Method: POST
  - Input: JSON `{"question": "Your question here"}`
  - Output: JSON `{"answer": "Answer text", "sources": ["Source1", "Source2"]}`
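The request and response shapes above can be exercised with a small client sketch. The helper names here are illustrative, not part of the project; the actual HTTP call (shown commented out) assumes the backend is running locally on port 8000.

```python
import json

ASK_URL = "http://localhost:8000/ask"  # backend address once the stack is up

def build_ask_payload(question: str) -> dict:
    # /ask expects a JSON body of the form {"question": "..."}
    return {"question": question}

def parse_ask_response(body: str) -> tuple[str, list[str]]:
    # /ask returns {"answer": "...", "sources": ["...", ...]}
    data = json.loads(body)
    return data["answer"], data["sources"]

# With the services running (docker compose up --build):
#   import requests
#   r = requests.post(ASK_URL, json=build_ask_payload("What is Gigalogy?"))
#   answer, sources = parse_ask_response(r.text)
```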
- Docker
- Docker Compose
- Clone the repository: `git clone https://github.com/shakil1819/LLMs-RAG-with-GeminiPRO---API.git`
- Navigate to the project directory: `cd LLMs-RAG-with-GeminiPRO---API/`
- Run the following command to start the services: `docker compose up --build`
- The backend will run on port `8000`, and the frontend (Streamlit app) will run on port `8501`.
- Environment Variables:
  - `GOOGLE_API_KEY`: API key for Google Generative AI.
  - `QDRANT_API_KEY`: API key for Qdrant.
  - `QDRANT_URL`: URL of the Qdrant service.
- Follow this directory structure so the secrets can be supplied via `docker-compose.yml`:
  ├── secrets
  │   ├── google_api_key.secret
  │   ├── qdrant_api_key.secret
  │   └── qdrant_url.secret
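A `docker-compose.yml` fragment wiring these secret files into a service might look like the sketch below. The service name `backend` and the build context are assumptions based on this README, not the project's actual compose file.

```yaml
secrets:
  google_api_key:
    file: ./secrets/google_api_key.secret
  qdrant_api_key:
    file: ./secrets/qdrant_api_key.secret
  qdrant_url:
    file: ./secrets/qdrant_url.secret

services:
  backend:
    build: .
    ports:
      - "8000:8000"
    secrets:            # mounted under /run/secrets/<name> in the container
      - google_api_key
      - qdrant_api_key
      - qdrant_url
```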
- Open your web browser and go to `http://localhost:8501` and `http://localhost:8000/docs`.
- You will see a text box where you can type your question.
- After entering your question and pressing Enter, the system will fetch and display the most relevant answer along with the source URLs.