Example code for a basic Long Term Memory Chatbot using Qdrant and a conversation history list.
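The core pattern is small enough to sketch. The snippet below is a minimal illustration, not the repository's actual code: it assumes a running Qdrant instance, a sentence-transformers embedding model, and placeholder collection/function names. Each exchange is embedded and upserted into Qdrant (long-term memory), while a plain Python list holds the running conversation (short-term memory); relevant past memories are searched back out for each new user input.

```python
# Minimal sketch of the long-term memory pattern (illustrative; names and models are assumptions).
import uuid

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct
from sentence_transformers import SentenceTransformer

client = QdrantClient(host="localhost", port=6333)
encoder = SentenceTransformer("all-MiniLM-L6-v2")   # 384-dimensional embeddings
COLLECTION = "chatbot_memories"                     # placeholder collection name

# Create the collection on first run (cosine similarity over 384-dim vectors).
try:
    client.get_collection(COLLECTION)
except Exception:
    client.create_collection(
        collection_name=COLLECTION,
        vectors_config=VectorParams(size=384, distance=Distance.COSINE),
    )

conversation_history = []  # short-term memory: the running list of turns


def remember(text: str) -> None:
    """Store one exchange in Qdrant as a long-term memory."""
    client.upsert(
        collection_name=COLLECTION,
        points=[PointStruct(id=str(uuid.uuid4()),
                            vector=encoder.encode(text).tolist(),
                            payload={"text": text})],
    )


def recall(query: str, top_k: int = 3) -> list[str]:
    """Return the stored memories most relevant to the current user input."""
    hits = client.search(collection_name=COLLECTION,
                         query_vector=encoder.encode(query).tolist(),
                         limit=top_k)
    return [hit.payload["text"] for hit in hits]
```

On each turn, a chatbot built this way would assemble its prompt from recall(user_input) plus the recent conversation_history, then remember() the new exchange and append it to the history.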
If you find this code useful, consider checking out my main AI Assistant project: https://github.com/libraryofcelsus/Aetherius_AI_Assistant
If you want more code tutorials like this, follow me on GitHub and YouTube: https://www.youtube.com/@LibraryofCelsus
(The channel hasn't launched yet; I have several scripts like this written but am still working on a video production format. Subscribe for the launch!)
In-depth code tutorials in a documentation format are available at: https://www.libraryofcelsus.com/research/public/code-tutorials/
- If using Qdrant Cloud, copy your API key and URL into the corresponding .txt files.
Qdrant Cloud Link: https://qdrant.to/cloud
To use a local Qdrant server, first install Docker: https://www.docker.com/, then see: https://github.com/qdrant/qdrant/blob/master/QUICK_START.md
Once the local Qdrant server is running, it should be auto-detected by the script.
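As a rough illustration of that auto-detection, the sketch below first tries a local Qdrant server and falls back to Qdrant Cloud using the key and URL stored in the .txt files. It is not the repository's actual code, and the .txt file names here are guesses; check the script for the real paths.

```python
# Illustrative Qdrant connection logic: prefer a local server, fall back to Qdrant Cloud.
from qdrant_client import QdrantClient


def connect_qdrant() -> QdrantClient:
    try:
        client = QdrantClient(host="localhost", port=6333)
        client.get_collections()  # simple ping; raises if no local server is reachable
        print("Connected to local Qdrant server.")
        return client
    except Exception:
        # Fall back to Qdrant Cloud using the values saved in the .txt files (hypothetical file names).
        with open("qdrant_url.txt") as f:
            url = f.read().strip()
        with open("qdrant_api_key.txt") as f:
            api_key = f.read().strip()
        print("Falling back to Qdrant Cloud.")
        return QdrantClient(url=url, api_key=api_key)
```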
- Install Git
- Install Python 3.10.6 and make sure to add it to PATH
- Open the Git Bash program
- Clone the repository: git clone https://github.com/libraryofcelsus/Basic-Long-Term-Memory-Chatbot.git
- Open the command line as administrator and navigate to the project folder with cd
- Create a virtual environment: python -m venv venv
- Activate the virtual environment with: .\venv\Scripts\activate
- Install the requirements with: pip install -r requirements.txt
- Edit the .txt files to set your username and chatbot name
- Edit the .txt files to set your main prompt and greeting (a sketch of how these files might be loaded appears after the setup steps)
- For Oobabooga: Install the Oobabooga Web UI, which can be done with the one-click installer found on its GitHub page: https://github.com/oobabooga/text-generation-webui. Launch the Web UI, go to the sessions tab, check both API boxes, then click apply and restart. Next, go to the models tab and download "TheBloke/Llama-2-7b-Chat-GPTQ" or "TheBloke/Llama-2-13B-chat-GPTQ" (if running on CPU, use the GGML version). Once the model is downloaded, set the model loader to ExLlama, set the gpu-split parameter to roughly 0.5 GB under your GPU's VRAM limit, and set max_seq_len to 4096. (An example API call appears after the setup steps.)
- For OpenAI: Add your OpenAI API key to key_openai.txt (an example call appears after the setup steps)
- Run the chatbot with python Script_Name.py
*Note: you will need to run .\venv\Scripts\activate to reactivate the virtual environment every time you open a new command line session.
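For the username, chatbot name, prompt, and greeting steps above, the script presumably just reads those .txt files at startup. A hedged sketch of that config loading (the file names here are guesses, not the repository's actual paths):

```python
# Illustrative config loading from the editable .txt files (paths are assumptions).
def load_setting(path: str, default: str = "") -> str:
    try:
        with open(path, encoding="utf-8") as f:
            return f.read().strip()
    except FileNotFoundError:
        return default


username    = load_setting("username.txt", "User")
bot_name    = load_setting("bot_name.txt", "Chatbot")
main_prompt = load_setting("main_prompt.txt")
greeting    = load_setting("greeting.txt")
```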
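For the Oobabooga route, enabling the API boxes exposes an HTTP endpoint that the chatbot can call. The sketch below assumes the legacy blocking API at http://localhost:5000/api/v1/generate; the port, endpoint, and parameters vary between Web UI versions, so treat this as an illustration rather than the repository's exact request.

```python
# Illustrative request to the Oobabooga text-generation-webui legacy API (endpoint and parameters are assumptions).
import requests


def oobabooga_generate(prompt: str, max_new_tokens: int = 300) -> str:
    response = requests.post(
        "http://localhost:5000/api/v1/generate",
        json={"prompt": prompt, "max_new_tokens": max_new_tokens, "temperature": 0.7},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["results"][0]["text"]
```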
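For the OpenAI route, the script reads the key from key_openai.txt and calls the Chat Completions API. A minimal sketch using the openai 0.x client style that was current when this project was written (newer openai versions use a different client interface):

```python
# Illustrative OpenAI call using the key stored in key_openai.txt (openai<1.0 style).
import openai

with open("key_openai.txt") as f:
    openai.api_key = f.read().strip()


def openai_generate(messages: list[dict]) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,  # e.g. [{"role": "user", "content": "Hello"}]
        max_tokens=300,
        temperature=0.7,
    )
    return response["choices"][0]["message"]["content"]
```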
My AI research is self-funded; consider supporting me if you find it useful :)
Discord: libraryofcelsus (old username style: Celsus#0262)
MEGA Chat: https://mega.nz/C!pmNmEIZQ