Pipeline combining offline LLMs with your own context to generate answers.


E2E-AI-Chatbot 🤖

Pipeline | Installation | User Interface | Model | Database | Search | Contact


Pipeline

Current:

Next stage:

  • FastAPI & Gradio backend
  • Dockerize packages
  • Add UI for ingesting uploaded files
  • Add login page
  • Add docs
  • Nginx for HTTP and HTTPS
  • Kubernetes (K8s)
  • CI/CD on cloud (AWS/Azure)

Installation Requirements

  • Minimum: CPU with 8 GiB RAM
  • Uncomment line 8 of pyproject.toml, packages = [{include = "**"}], to use all internal packages (passes Flake8)
  • Install the packages and download the GPT4All model as follows:

  1. Run locally

chmod u+x ./setup.sh
bash ./setup.sh

  • Build MongoDB, Mongo Express, Logstash, Elasticsearch and Kibana, then launch the app:

docker compose -f docker-compose-service.yml up
poetry run python app.py --host 0.0.0.0 --port 8071

  2. Run with Docker

docker compose up

User Interface App

poetry run python app.py --host 0.0.0.0 --port 8071

Runs on: http://localhost:8071

  1. Chatbot:

  2. Ingest PDF:

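The repository's app.py is not reproduced in this README, so the following is only a rough sketch, assuming a Gradio Blocks layout with one tab per feature above; the handler functions and widget names are hypothetical placeholders for the real chatbot and ingestion logic.

# Hypothetical sketch of a Gradio UI with a Chatbot tab and an Ingest PDF tab.
# Handler functions are placeholders, not the repository's actual implementation.
import argparse
import gradio as gr

def answer(message):
    # Placeholder: the real app would query the LLM with retrieved context.
    return f"Echo: {message}"

def ingest_pdf(file):
    # Placeholder: the real app would parse and store the uploaded PDF.
    return f"Ingested {file.name}" if file else "No file uploaded"

with gr.Blocks() as demo:
    with gr.Tab("Chatbot"):
        chatbot = gr.Chatbot()
        msg = gr.Textbox(label="Your question")

        def respond(message, history):
            history = history + [(message, answer(message))]
            return "", history

        msg.submit(respond, inputs=[msg, chatbot], outputs=[msg, chatbot])
    with gr.Tab("Ingest PDF"):
        pdf = gr.File(label="PDF file", file_types=[".pdf"])
        status = gr.Textbox(label="Status")
        gr.Button("Ingest").click(ingest_pdf, inputs=pdf, outputs=status)

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--host", default="0.0.0.0")
    parser.add_argument("--port", type=int, default=8071)
    args = parser.parse_args()
    demo.launch(server_name=args.host, server_port=args.port)
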
(back to top)

Model

  1. GPT4All: currently the best commercially licensable model, based on GPT-J and trained by Nomic AI on the latest curated GPT4All dataset.
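
As a minimal sketch of how the model can be used from Python with the gpt4all bindings (the model filename below is an assumption; setup.sh may download a different checkpoint):

# Minimal sketch using the gpt4all Python bindings; the model filename is an
# assumption and may differ from the one downloaded by setup.sh.
from gpt4all import GPT4All

model = GPT4All("ggml-gpt4all-j-v1.3-groovy")

context = "GPT4All is a locally running, commercially licensable model."
question = "Who trained GPT4All?"
prompt = f"Answer using the context.\nContext: {context}\nQuestion: {question}\nAnswer:"

# Generation runs fully offline on CPU.
print(model.generate(prompt, max_tokens=128, temp=0.2))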

Database

  1. MongoDB

Runs on: http://localhost:27017
poetry run python src/ingest_database.py --mongodb-host "mongodb://localhost:27017/" --data-path "static/pdf/"
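
A rough idea of what this ingestion step does, sketched with pymongo and pypdf; the database, collection, and field names here are assumptions, not necessarily those used by src/ingest_database.py.

# Hypothetical sketch of PDF ingestion into MongoDB; database, collection and
# field names are assumptions, not taken from src/ingest_database.py.
from pathlib import Path
from pymongo import MongoClient
from pypdf import PdfReader

client = MongoClient("mongodb://localhost:27017/")
collection = client["chatbot"]["documents"]

for pdf_path in Path("static/pdf/").glob("*.pdf"):
    text = "\n".join(page.extract_text() or "" for page in PdfReader(str(pdf_path)).pages)
    collection.insert_one({"filename": pdf_path.name, "content": text})
    print(f"Ingested {pdf_path.name}")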

MongoDB Compass (Windows)

Mongo Express

Runs on: http://localhost:8081

Data Migration (Logstash)

Runs on: http://localhost:9600
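
Port 9600 is Logstash's monitoring API, so the migration pipeline can be checked from Python; a small sketch (the pipeline names reported depend on the Logstash configuration shipped in the repo):

# Quick health check against the Logstash monitoring API on port 9600 (sketch).
import requests

stats = requests.get("http://localhost:9600/_node/stats/pipelines", timeout=5).json()
for name, pipeline in stats.get("pipelines", {}).items():
    events = pipeline.get("events", {})
    print(name, "in:", events.get("in"), "out:", events.get("out"))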

Search

  1. Elasticsearch & Kibana
poetry run python src/ingest_search.py --mongodb-host "mongodb://localhost:27017/" --es-host "http://localhost:9200/" --index_name "document"

Elasticsearch runs on: http://localhost:9200

Kibana runs on: http://localhost:5601
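
Once ingestion has run, the "document" index can be queried directly with the Elasticsearch Python client; a minimal sketch assuming the 8.x client and a "content" text field (both assumptions):

# Hypothetical full-text query against the "document" index; assumes the 8.x
# Python client and a "content" field, neither of which is confirmed by the repo.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200/")

response = es.search(
    index="document",
    query={"match": {"content": "chatbot"}},
    size=3,
)
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("filename"))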

Contact

Impressive

(back to top)