A question answering system built with vector databases and LLMs.
Parts of this project are implemented with reference to michaelliao/llm-embedding-sample.
- Python 3.10
- Docker
- Clone this repo
- Install requirements with `pip install -r requirements.txt`
- Start PostgreSQL with Docker:

  ```bash
  docker run -d \
    --rm \
    --name pgvector \
    -p 5432:5432 \
    -e POSTGRES_PASSWORD=password \
    -e POSTGRES_USER=postgres \
    -e POSTGRES_DB=postgres \
    -e PGDATA=/var/lib/postgresql/data/pgdata \
    -v /path/to/llm-embedding-qa/pg-data:/var/lib/postgresql/data \
    -v /path/to/llm-embedding-qa/pg-init-script:/docker-entrypoint-initdb.d \
    ankane/pgvector:latest
  ```

  NOTE: replace `/path/to/...` with the real path.
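Once the pgvector container is up, queries are served with its vector distance operators. As a rough illustration (not this repo's actual schema), a nearest-neighbour lookup might be built like this; the table name `docs`, column name `embedding`, and the use of the cosine-distance operator `<=>` are all assumptions:

```python
# Hypothetical sketch of a pgvector similarity query. Table and column
# names are assumptions, not taken from this repository.
def build_similarity_query(table: str, column: str, top_k: int) -> str:
    """Build a pgvector cosine-distance query (the "<=>" operator),
    parameterized on the query embedding (%s placeholder)."""
    return (
        f"SELECT id, content FROM {table} "
        f"ORDER BY {column} <=> %s "
        f"LIMIT {top_k}"
    )

query = build_similarity_query("docs", "embedding", 3)
print(query)
```

The `%s` placeholder would be filled with the question's embedding by the database driver (e.g. psycopg2) at execution time.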
- Run `python main.py`, then edit `config.yaml` to set your OpenAI `api_key`.
- Put your markdown-format documents in the `docs` folder.
  - The wiki files of QChatGPT are provided in the `docs_examples` folder as examples.
- Run `python main.py` again; it will automatically build the vector database and start the server.
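Building the vector database implies splitting the markdown documents into chunks before embedding them. A minimal sketch of that kind of preprocessing, assuming a split-on-headings strategy with a character cap (the repo's actual chunking logic may differ):

```python
# Assumed chunking strategy: split on top-level markdown headings,
# then cap each section at max_chars. Not the repo's actual logic.
def chunk_markdown(text: str, max_chars: int = 500) -> list[str]:
    """Split a markdown document into chunks suitable for embedding."""
    sections, current = [], []
    for line in text.splitlines():
        if line.startswith("# ") and current:
            sections.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        sections.append("\n".join(current))
    # Further split any section that exceeds max_chars.
    chunks = []
    for sec in sections:
        for i in range(0, len(sec), max_chars):
            chunks.append(sec[i : i + max_chars])
    return chunks

doc = "# Install\nSteps here.\n# Usage\nRun it."
print(chunk_markdown(doc))  # one chunk per heading
```

Each resulting chunk would then be embedded once and stored in pgvector, so later questions only pay for a single embedding call plus a database lookup.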
`GET /ask`

- `content`: the content of the question
- `strict`: (optional) skip the LLM request if `strict=true` and no related answer is found in the vector database
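A client calls the endpoint above with the question in the query string. A hedged sketch of building such a request URL; the host and port are assumptions, since the README does not state where the server listens:

```python
from urllib.parse import urlencode

# Host/port below are placeholders -- the README does not specify them.
def ask_url(base: str, content: str, strict: bool = False) -> str:
    """Build the GET /ask URL, adding strict=true only when requested."""
    params = {"content": content}
    if strict:
        params["strict"] = "true"
    return f"{base}/ask?{urlencode(params)}"

url = ask_url("http://localhost:8080", "How do I install QChatGPT?", strict=True)
print(url)
```

With `strict=true`, a miss in the vector database returns without ever hitting the LLM, which avoids paying for an OpenAI call on unanswerable questions.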