Resume Database Chatbot Using Local LLM

Playing with RAG using Ollama, LangChain, and Streamlit.

Project Description

This project demonstrates how recruiters and HR personnel can benefit from a chatbot that answers questions about candidates.

Data

https://www.kaggle.com/datasets/gauravduttakiit/resume-dataset/data

Reference

  • https://www.jeremymorgan.com/blog/generative-ai/how-to-run-llm-local-windows/
  • https://levelup.gitconnected.com/talk-to-your-csv-llama2-how-to-use-llama2-and-langchain-69012c5ff653
  • https://blog.duy-huynh.com/build-your-own-rag-and-run-them-locally/
  • https://ollama.ai/library

Usage

Written in Python 3.9.9. Some technologies used:

  • Ollama
  • LangChain
  • Streamlit

To see the project in action, install the required libraries with

pip install langchain langchain-community chromadb fastembed streamlit streamlit_chat

and execute streamlit run app.py.
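Under the hood, the app wires these pieces into a retrieval-augmented generation (RAG) pipeline: the resume data is split into chunks, embedded with FastEmbed, indexed in Chroma, and retrieved as context for a local model served by Ollama. The sketch below illustrates that flow; the CSV path, the model name (mistral), and the prompt wording are illustrative assumptions rather than the exact contents of app.py.

# A minimal RAG sketch under the assumptions above; not the exact app.py.
from langchain_community.document_loaders import CSVLoader
from langchain_community.embeddings import FastEmbedEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnablePassthrough

# Load the resume CSV (path is a placeholder) and split it into chunks.
docs = CSVLoader("data/Resume.csv").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1024, chunk_overlap=100).split_documents(docs)

# Embed the chunks with FastEmbed and index them in a local Chroma store.
vectorstore = Chroma.from_documents(chunks, embedding=FastEmbedEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

# Local LLM served by Ollama; pull a model first, e.g. `ollama pull mistral`.
llm = Ollama(model="mistral")

prompt = PromptTemplate.from_template(
    "Answer the recruiter's question using only the resume excerpts below.\n\n"
    "Resumes:\n{context}\n\nQuestion: {question}"
)

def format_docs(documents):
    return "\n\n".join(d.page_content for d in documents)

# Retrieve relevant resume chunks, fill the prompt, and generate an answer.
chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

print(chain.invoke("Which candidates have Python and SQL experience?"))

In the actual app, a Streamlit chat interface (streamlit and streamlit_chat) sits on top of a chain like this, passing each user question to the retriever and displaying the model's answer.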

Meta

Ednalyn C. De Dios – @ecdedios

Distributed under the MIT license. See LICENSE for more information.

Contributing

  1. Fork it (https://github.com/ecdedios/resume-chatbot-local-llm/fork)
  2. Create your feature branch (git checkout -b feature/fooBar)
  3. Commit your changes (git commit -am 'Add some fooBar')
  4. Push to the branch (git push origin feature/fooBar)
  5. Create a new Pull Request
