Anil AI

Anil AI is a chat application that harnesses Generative AI, using the LLAMA 2 model for inference. It is built with a React.js frontend and a Python FastAPI backend for a seamless, user-friendly interface.

It incorporates user authentication and authorization, providing a secure platform for users to interact. The entire application is dockerized, which streamlines deployment, and model cloning and the other setup steps are fully automated, reducing manual intervention.

At present, Anil AI serves as a foundational model, setting the stage for continuous evolution. Future enhancements and newer versions are planned.

Demo Video

anil_ai_vid_1.mp4

Prerequisites

Before you begin, ensure you have met the following requirements:

  • A GPU with at least 8 GB of VRAM (16 GB recommended)
  • At least 8 GB of RAM (16 GB recommended)
  • A 4-core CPU (8-core recommended)
  • 20 GB of disk space (SSD recommended)
  • Nvidia container toolkit
  • CUDA 12.0 or newer
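As a quick sanity check, some of the prerequisites above can be probed from Python. This is an illustrative sketch, not part of the project: the thresholds mirror the list above, and `nvidia-smi` is only found when the Nvidia driver is installed (GPU VRAM and RAM checks are omitted, since they need extra tooling).

```python
import os
import shutil

def check_prerequisites(path="/"):
    """Report whether this machine meets some of the minimum requirements."""
    disk_free_gb = shutil.disk_usage(path).free / 1024**3
    return {
        "cpu_cores_ok": (os.cpu_count() or 0) >= 4,          # 4-core minimum
        "disk_ok": disk_free_gb >= 20,                       # 20 GB free space
        "nvidia_driver_ok": shutil.which("nvidia-smi") is not None,
    }

if __name__ == "__main__":
    for name, ok in check_prerequisites().items():
        print(f"{name}: {'OK' if ok else 'MISSING'}")
```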

Creating the production build

To set up the production environment, follow the procedures below.

First, install the Nvidia container toolkit and configure it with Docker. If this has already been done, you can skip this step.

  1. Update the system packages
    sudo apt-get update
  2. Install the Nvidia container toolkit
    sudo apt-get install -y nvidia-container-toolkit
  3. Configure the Nvidia container toolkit with Docker
    sudo nvidia-ctk runtime configure --runtime=docker
  4. Restart the Docker service to apply the configuration
    sudo systemctl restart docker

Run the Application with Docker

  1. Generate a secret key using the openssl tool:

    openssl rand -hex 16
  2. Launch the Docker container, pulling the prebuilt image from Docker Hub:

    sudo docker run -d -e SECRET_KEY='58d763bfae7c03f24b016c8f5401080f' --runtime=nvidia --gpus all --name your_container_name -p 0.0.0.0:8020:8020 anslin/anil_ai_chat_llama_2:v1.0.0

    Replace '58d763bfae7c03f24b016c8f5401080f' with your generated secret key.

    You can find the Docker image for this project at the following Docker Hub repository: anslin/anil_ai_chat_llama_2

  3. The administrative user's credentials are generated automatically and can be found in the Docker container's log:

    sudo docker logs --follow your_container_name
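If openssl is not available, the same kind of secret key can be generated with Python's standard library; this snippet is an equivalent alternative, not a project requirement:

```python
import secrets

# Equivalent to `openssl rand -hex 16`: 16 random bytes as 32 hex characters.
secret_key = secrets.token_hex(16)
print(secret_key)
```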

Build and run the application with Docker

  1. Clone the repository.

    git clone https://github.com/anslin-raj/anil_ai_chat_llama_2.git
    cd anil_ai_chat_llama_2
  2. Build the Docker image from the source files.

    sudo docker build -t your_image_name .
  3. A secret key is required to run the Docker image. Generate one with the openssl tool:

    openssl rand -hex 16
  4. Start the Docker container using the image built in the previous step:

    sudo docker run -d -e SECRET_KEY='58d763bfae7c03f24b016c8f5401080f' --runtime=nvidia --gpus all --name your_container_name -p 0.0.0.0:8020:8020 your_image_name

    Replace '58d763bfae7c03f24b016c8f5401080f' with your generated secret key.

  5. The administrative user's credentials are generated automatically and can be found in the Docker container's log:

    sudo docker logs --follow your_container_name
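The `docker run` invocations above can also be captured in a Compose file. The sketch below is an illustrative assumption, not shipped with the project: the service name is made up, and the GPU reservation syntax assumes Docker Compose v2 with the Nvidia runtime configured as described earlier.

```yaml
# docker-compose.yml (illustrative; adjust the image, name, and key for your setup)
services:
  anil_ai:
    image: anslin/anil_ai_chat_llama_2:v1.0.0
    environment:
      - SECRET_KEY=58d763bfae7c03f24b016c8f5401080f   # replace with your generated key
    ports:
      - "8020:8020"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

With this file in place, `docker compose up -d` should be roughly equivalent to the `docker run` command above.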

Sample Log

                _ _            _____ 
    /\         (_) |     /\   |_   _|
   /  \   _ __  _| |    /  \    | |  
  / /\ \ | '_ \| | |   / /\ \   | |  
 / ____ \| | | | | |  / ____ \ _| |_ 
/_/    \_\_| |_|_|_| /_/    \_\_____| v1.0.0
                                     
[2023-12-28 20:11:23 +0000] [15] [INFO] Device using: cuda:0
[2023-12-28 20:11:23 +0000] [17] [INFO] Cloning model "anslin-raj/Llama-2-7b-chat-hf-8-bit" from Hugging Face...
[2023-12-28 20:12:14 +0000] [50] [INFO] Model "anslin-raj/Llama-2-7b-chat-hf-8-bit" cloned successfully.
[2023-12-28 20:12:15 +0000] [55] [INFO] Starting gunicorn 21.2.0
[2023-12-28 20:12:15 +0000] [55] [INFO] Listening at: http://0.0.0.0:8020 (55)
[2023-12-28 20:12:15 +0000] [55] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2023-12-28 20:12:15 +0000] [56] [INFO] Booting worker with pid: 56
[2023-12-28 20:12:17 +0000] [56] [INFO] Device using: cuda:0
[2023-12-28 20:12:34 +0000] [56] [INFO] Started server process [56]
[2023-12-28 20:12:34 +0000] [56] [INFO] Waiting for application startup.
[2023-12-28 20:12:34 +0000] [56] [INFO] Initiated the generation of the user table.
[2023-12-28 20:12:34 +0000] [56] [INFO] Generated username / password: admin / P7U0*xLSFnH?
[2023-12-28 20:12:34 +0000] [56] [INFO] User table generation completed.
[2023-12-28 20:12:34 +0000] [56] [INFO] Application startup complete.

In this sample log, the admin username is admin and the password is P7U0*xLSFnH?.
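If you want to pull those credentials out of the log programmatically, a small regex over the startup line works. This is an illustrative sketch; the log format is taken from the sample above:

```python
import re

def extract_credentials(log_text):
    """Find the auto-generated admin credentials in the container log."""
    match = re.search(r"Generated username / password: (\S+) / (\S+)", log_text)
    return match.groups() if match else None

sample = "[2023-12-28 20:12:34 +0000] [56] [INFO] Generated username / password: admin / P7U0*xLSFnH?"
print(extract_credentials(sample))  # ('admin', 'P7U0*xLSFnH?')
```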

After starting the container, you can access the application by navigating to http://127.0.0.1:8020/ in your web browser.

Setting up for code development

The instructions below guide you through setting up a copy of the project on your local machine for development and testing.

Prerequisites

  • Python v3.10.12
  • Node v18.16.0

Installation

Backend

  1. Clone the repository
    git clone https://github.com/anslin-raj/anil_ai_chat_llama_2.git
    cd anil_ai_chat_llama_2
  2. Create a virtual environment using Python's virtualenv
    python3 -m virtualenv venv
  3. Activate the virtual environment
    source venv/bin/activate
  4. Install the required Python packages
    pip install -r requirements.txt
  5. To enable logging in development mode, set the DEBUG parameter in config.py
    DEBUG = True
  6. Start the FastAPI local development server
    uvicorn main:app --host 0.0.0.0 --port 8020 --reload
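One way the DEBUG flag in step 5 might drive logging is sketched below. The names here (`DEBUG`, the logger setup) are illustrative assumptions, not the project's actual code:

```python
import logging

DEBUG = True  # mirrors the DEBUG flag in config.py (illustrative)

def setup_logging(debug: bool) -> logging.Logger:
    """Verbose logging in development mode, warnings-and-up otherwise."""
    level = logging.DEBUG if debug else logging.WARNING
    logging.basicConfig(level=level, format="[%(asctime)s] [%(levelname)s] %(message)s")
    logger = logging.getLogger("anil_ai")
    logger.setLevel(level)
    return logger

logger = setup_logging(DEBUG)
logger.debug("Development logging enabled.")
```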

Frontend

  1. Navigate to the chat directory
    cd chat
  2. Install the required node packages
    npm install
  3. Uncomment the development URL in the src/constants/Config.js file
    export const API_URL = "http://127.0.0.1:8020/api/v1";
  4. Start the React.js development server
    npm start

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Support

If you encounter any issues or require further assistance, please raise an issue on GitHub.

License

This project is licensed under the BSD 4-Clause license; see the LICENSE file for the full terms.

Contact

Anslin Raj - anslinracer@gmail.com

Screenshots

anil_ai_img_1

anil_ai_img_2

anil_ai_img_3

anil_ai_img_4

anil_ai_img_5