Answer all frequently asked questions!
- This is a Frequently Asked Questions (FAQ) answering chatbot, custom-built to answer any question related to IEEE-VIT.
- This project is built from scratch without using pre-existing platforms such as Dialogflow or Amazon Lex.
- Further, an API is built for it using FastAPI (a minimal serving sketch follows this list).
- Two separate models, one in TensorFlow and one in PyTorch, were built for intent classification; both are available in the model_training directory.
- Both models work independently and perform the same task: classifying an input query into a fixed set of intents.
- The data used to train both the TensorFlow model and the PyTorch model is the same and was custom-generated as per requirements.
- The data had to be augmented before training the models, as the manually generated data was insufficient (a small augmentation example also follows this list).
- The Data directory inside the model_training directory contains the augmentation notebook as well as the augmented data.
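Conceptually, serving an intent classifier through FastAPI looks something like the sketch below. This is only an illustration: the endpoint path, request schema, and the `classify_intent` helper are assumptions, not the exact code in main.py.

```python
# Illustrative sketch only -- the endpoint name, schema, and classify_intent()
# are hypothetical; see main.py in this repo for the real implementation.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    text: str

def classify_intent(text: str) -> str:
    # Placeholder: the real project loads a trained TensorFlow or PyTorch
    # model and maps the query to one of the fixed intents.
    return "greeting"

@app.post("/predict")
def predict(query: Query):
    intent = classify_intent(query.text)
    return {"intent": intent}
```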
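For intuition, text augmentation on small intent datasets is often done with simple surface-level edits such as random word swaps. The snippet below is a generic example of that idea, not a description of what the notebook in the Data directory actually does.

```python
# Generic text-augmentation idea (random word swap), shown for intuition only;
# the actual augmentation used in the Data notebook may differ.
import random

def random_swap(sentence: str, n_swaps: int = 1) -> str:
    words = sentence.split()
    for _ in range(n_swaps):
        if len(words) < 2:
            break
        i, j = random.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return " ".join(words)

print(random_swap("when does ieee vit recruit new members"))
```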
Before proceeding, make sure you have Python 3.8 or above installed.
- Clone this repository:
git clone https://github.com/IEEE-VIT/IEEE-FAQ-Chatbot.git
- Create a Python virtual environment and activate it:
pip install virtualenv
python -m venv myenv
myenv\Scripts\activate
(On Linux/macOS, activate with source myenv/bin/activate instead.)
- Install the requirements:
pip install -r requirements.txt
*Important note: to install torch 1.7.1, requirements.txt references a direct wheel (.whl) file, and wheel files are Python-version and OS specific. The referenced .whl file is for Python 3.8 on Linux. If you have a different Python version or OS, replace the existing .whl entry in requirements.txt with the correct one. Wheels for other Python versions and operating systems can be found here. Make sure you still pick torch version 1.7.1 so it stays compatible with the PyTorch model.
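For illustration, a requirements.txt entry that pins torch via a direct wheel usually looks like the line below. The exact URL shown is only an example (a CPU build for Python 3.8 on Linux); substitute the wheel that matches your own Python version and OS.

```
# Illustrative example only -- replace with the wheel that matches your setup.
# "cp38" = Python 3.8, "linux_x86_64" = 64-bit Linux, "+cpu" = CPU-only build.
https://download.pytorch.org/whl/cpu/torch-1.7.1%2Bcpu-cp38-cp38-linux_x86_64.whl
```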
- Start the server on localhost:
uvicorn main:app --reload
Once the server has started successfully, go to http://127.0.0.1:8000/docs to test your API. 8000 is the default port, which can be changed in main.py to any unused port.
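You can also exercise the API programmatically instead of through the interactive docs. The snippet below assumes a POST endpoint and request body like the ones in the sketch above; check the /docs page for the actual route and schema exposed by main.py.

```python
# Hypothetical client call -- confirm the real route and payload at /docs.
import requests

response = requests.post(
    "http://127.0.0.1:8000/predict",          # assumed endpoint
    json={"text": "When was IEEE-VIT founded?"},
)
print(response.json())
```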
To install via Docker, make sure you have Docker Desktop installed if you are working on Windows.
- Clone this repository:
git clone https://github.com/IEEE-VIT/IEEE-FAQ-Chatbot.git
- cd into the root directory and build the image using the Dockerfile:
docker build -t myimage .
- Once the image has been built, start the container:
docker run -d --name mycontainer -p 8000:8000 myimage
This will start the container at localhost port 8000. Go to http://127.0.0.1:8000/docs to test your API. 8000 is the default port, which can be changed in Dockerfile to any unused port.
- Fork this repository.
- Clone the forked repository to your local machine and set it up using the above-mentioned steps.
- Create and check out a new branch using:
git checkout -b new-feature
You can use any branch name you like!
- Make all the changes you think will help improve this project! Then stage, commit, and push them with these commands:
git add .
git commit -m "a short description of your changes"
git push -u origin new-feature
- Now, open the forked repo in your browser and raise a PR to the master branch of this repo.
That's all! Now just hang tight while our maintainers review and merge your PR.
If you are new to contributing, check out the contributing guidelines. Do check out issues labelled hacktoberfest for some goodies and a T-shirt!
*Note: Training the models is not required for installing and using the API with the above-mentioned steps. This is because tensorflow_model.py and pytorch_model.py directly load the trained and saved models with .h5 and .pt extensions respectively.
If you wish to see the training code, it can be found inside the model_training directory.
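For reference, loading pre-trained models in those two formats generally looks like the snippet below. The file names here are placeholders; the actual paths and loading code live in tensorflow_model.py and pytorch_model.py.

```python
# Generic loading pattern -- file names are placeholders, not the repo's actual paths.
import tensorflow as tf
import torch

# Keras/TensorFlow: the .h5 file stores the architecture and weights together.
tf_model = tf.keras.models.load_model("saved_model.h5")

# PyTorch: the .pt file is loaded with torch.load (a state_dict or a full model object).
pt_model = torch.load("saved_model.pt", map_location="cpu")
```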
Consider leaving a ⭐ if you liked the project and organization :)