Healthopedia

Won 3rd place among all IITs at the PAN IIT GEN-AI Hackathon

A chatbot for medical healthcare built with the Mixtral 8x7B model, FAISS, LangChain, RAG, and a question-answering pipeline


Table of Contents

  1. Introduction
  2. Tech Stack
  3. Functioning of model
  4. Features
  5. Quick Start
  6. Snippets
  7. Links
  8. More

Introduction

  • HealthopediaAI is an open-source AI health application: an intelligent virtual assistant designed to provide support and information related to health and wellness through natural language conversations.
  • Leveraging artificial intelligence (AI) and natural language processing (NLP), we aim to make healthcare services more accessible and efficient by offering a user-friendly, prompt-assisted interface (it suggests related prompts to help non-technical users) for seeking medical information, advice, or assistance.
  • The web app is built with Next.js and highlights the key prompt-assistance features, along with a comprehensive CRUD-based user profile and prompt-sharing system backed by a MongoDB database and NextAuth authentication; the Mixtral 8x7B LLM powers the AI health functionality.
  • We loaded the model from Hugging Face and used a Hugging Face token for authorization.
  • We used RAG (Retrieval-Augmented Generation) for better answering of prompts, grounding the LLM's output in our dataset.

Tech Stack

  • Languages: Python, JavaScript
  • Deep Learning Libraries: Hugging Face Transformers
  • Text Embeddings and Vector Search: LangChain Community
  • Search Engine: FAISS (Facebook AI Similarity Search)
  • Natural Language Understanding and Generation: RAG (Retrieval-Augmented Generation)
  • Model Loading and Interaction: Hugging Face Model Hub
  • Web Framework: FastAPI, Next.js, Tailwind CSS
  • Database: MongoDB, Mongoose
  • Authentication: NextAuth

Functioning of model

Data Information

  • The dataset is "The GALE Encyclopedia of Medicine"

Loading the document and splitting text:

  • The PDF file is loaded with PyPDFLoader, which extracts its text
  • The text is split into smaller chunks using langchain.text_splitter (see the sketch below)
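
The notebook in the repository has the exact code; the following is a minimal sketch of this step, in which the file name, the specific splitter class, and the chunk sizes are assumptions rather than values taken from the repo:

```python
from langchain_community.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Load the encyclopedia PDF and extract its text page by page
# (the file name below is a placeholder).
loader = PyPDFLoader("gale_encyclopedia_of_medicine.pdf")
documents = loader.load()

# Split the extracted text into overlapping chunks small enough to embed.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents(documents)
```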

Text Embeddings:

  • Text embeddings are generated using the HuggingFaceEmbeddings class from langchain_community.embeddings.
  • The model used for embedding is 'sentence-transformers/all-MiniLM-L6-v2', configured to run on the CPU (see the sketch below).
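
A minimal sketch of the embedding setup described above (the variable name is arbitrary):

```python
from langchain_community.embeddings import HuggingFaceEmbeddings

# all-MiniLM-L6-v2 sentence-transformer embeddings, forced onto the CPU.
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2",
    model_kwargs={"device": "cpu"},
)
```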

Converting to vectors and saving it

  • The text chunks are converted to vectors using the FAISS class from langchain_community.vectorstores, and the resulting index is saved, as sketched below
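
A minimal sketch of this step, reusing chunks and embeddings from the previous steps; the index path is a placeholder:

```python
from langchain_community.vectorstores import FAISS

# Embed the chunks, build the FAISS index, and persist it to disk.
db1 = FAISS.from_documents(chunks, embeddings)
db1.save_local("faiss_index")
```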

Creating a RAG Using LangChain and FAISS

  • A retriever is created from the vector store (db1). The retriever is configured for similarity search, enabling the retrieval of documents similar to a given query.

  • We then check the vector database to see whether it retrieves similar chunks of content for a given prompt (see the sketch below).

  • At this stage, answers are fetched from the vector database only; the LLM is not yet involved.
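
A sketch of this similarity check; the sample prompt is an assumption chosen to mirror the query used later:

```python
# Turn the vector store into a similarity-search retriever and sanity-check
# that it returns relevant chunks for a sample prompt (no LLM involved yet).
retriever = db1.as_retriever(search_type="similarity")
docs = retriever.get_relevant_documents("What are the symptoms of glaucoma?")
for doc in docs:
    print(doc.page_content[:200])
```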

Building an LLM Chain for Question-Answering

  • The API token is fetched from Hugging Face, and the model is loaded from the Hugging Face Hub
  • A prompt template is generated and an LLM chain is created for answering our prompts (see the sketch below)
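
A sketch of this step under stated assumptions: the Mixtral repo id, generation parameters, and prompt wording are assumptions, and the token shown is a placeholder:

```python
import os
from langchain_community.llms import HuggingFaceHub
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Authorize against the Hugging Face Hub with your API token (placeholder).
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "hf_..."

# Load Mixtral 8x7B through the Hub wrapper (repo id assumed here).
llm = HuggingFaceHub(
    repo_id="mistralai/Mixtral-8x7B-Instruct-v0.1",
    model_kwargs={"temperature": 0.3, "max_new_tokens": 512},
)

# Prompt template that injects retrieved context alongside the user's question.
template = """Use the following context to answer the medical question.

Context: {context}

Question: {question}

Answer:"""
prompt = PromptTemplate(template=template, input_variables=["context", "question"])

# LLM chain that fills the template and calls the model.
llm_chain = LLMChain(llm=llm, prompt=prompt)
```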

Creating a RAG Chain

A RAG chain is created so that the model has context for the query/prompt:

  • A retriever is created from the vector store db1 using the as_retriever method.
  • The retriever is configured for similarity search, aiming to retrieve the top 20 documents similar to a given query.
  • A RAG (Retrieval-Augmented Generation) chain is constructed using the rag_chain variable.
  • The chain includes a retriever for providing context and a language model chain (llm_chain) for generating responses.
  • The RAG chain is invoked with a specific query ("Tell symptoms of glaucoma and tell how to cure it").
  • The retriever in the chain fetches relevant documents based on similarity to the query.
  • The language model chain (llm_chain) then generates a response based on the retrieved context and the given question (see the sketch below).
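
A sketch of how such a chain can be wired with LangChain's runnable (LCEL) syntax; the query and k=20 come from the description above, while the exact composition style and helper function are assumptions:

```python
from langchain_core.runnables import RunnablePassthrough

# Retriever that returns the 20 most similar chunks for a query.
retriever = db1.as_retriever(search_type="similarity", search_kwargs={"k": 20})

def format_docs(docs):
    # Concatenate the retrieved chunks into a single context string.
    return "\n\n".join(doc.page_content for doc in docs)

# The retriever supplies the context, the question passes through unchanged,
# and llm_chain generates the final answer from both.
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | llm_chain
)

# The answer is returned under the chain's output key (e.g. "text" for LLMChain).
response = rag_chain.invoke("Tell symptoms of glaucoma and tell how to cure it")
print(response)
```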

Key Features

Modern Design with Glassmorphism Trend Style

A modern and visually appealing design, incorporating the glassmorphism trend style for a sleek and contemporary appearance.

Discover and Share AI Prompts

Allow users to discover AI prompts shared by the community and create their own prompts to share with the world.

AI-enabled Diagnosis

Responses are generated by our RAG chain, built with LangChain and FAISS for question answering.

Edit and Delete Created Prompts

Users have the ability to edit their created prompts at any time and delete them when needed.

Profile Page

Each user gets a dedicated profile page showcasing all the prompts they've created, providing an overview of their contributions.

View Other People's Profiles

Users can explore the profiles of other creators to view the prompts they've shared, fostering a sense of community.

Copy to Clipboard

Implement a convenient "Copy to Clipboard" functionality for users to easily copy the AI prompts for their use.

Search Prompts by Specific Tag

Allow users to search for prompts based on specific tags, making it easier to find prompts related to specific topics.

Google Authentication using NextAuth

Enable secure Google authentication using NextAuth, ensuring a streamlined and trustworthy login experience.

Responsive Website

Develop a fully responsive website to ensure optimal user experience across various devices, from desktops to smartphones.

Code Architecture and Reusability

Implement best practices for code architecture and reusability to enhance maintainability and scalability.

Quick Start

Follow these steps to set up the project locally on your machine.

Prerequisites

Make sure you have the following installed on your machine:

  • Git
  • Node.js and npm
  • Python and pip (plus Jupyter, to run the backend notebook)

Cloning the Repository

```bash
git clone https://github.com/GGarv/Healthopedia.git
cd Healthopedia
```

Installation

Install the project dependencies using npm:

```bash
npm install
```

Install the model packages from requirements.txt:

```bash
pip install -r requirements.txt
```

Set Up Environment Variables

Create a new file named .env in the root of your project and add the following content:

```env
NEXTAUTH_URL=http://localhost:3000
NEXTAUTH_URL_INTERNAL=http://localhost:3000
NEXTAUTH_SECRET=
GOOGLE_ID=
GOOGLE_CLIENT_SECRET=
MONGODB_URI=
```

Replace the placeholder values with your actual credentials. You can obtain them from the corresponding services: the Google Cloud Console (for the Google ID and client secret), Cryptpool (for a random NextAuth secret), and MongoDB (for the database URI).

Running the Project

Open the healthopedia_backend folder and run all cells of the end.ipynb notebook to launch the server that serves the model to the feed.

Note: this step is important; without it, responses will not be generated on the website. (This is also why prompt generation does not work at https://healthopedia.vercel.app/, since Vercel is unable to reach the model server.)

Then start the Next.js development server:

```bash
npm run dev
```

Open http://localhost:3000 in your browser to view the project.

Snippets

globals.css

```css
@import url("https://fonts.googleapis.com/css2?family=Inter:wght@100;200;300;400;500;600;700;800;900&display=swap");

@tailwind base;
@tailwind components;
@tailwind utilities;

/* 
  Note: The styles for this gradient grid background is heavily inspired by the creator of this amazing site (https://dub.sh) – all credits go to them! 
*/

.main {
  width: 100vw;
  min-height: 100vh;
  position: fixed;
  display: flex;
  justify-content: center;
  padding: 120px 24px 160px 24px;
  pointer-events: none;
}

.main:before {
  background: radial-gradient(circle, rgba(2, 0, 36, 0) 0, #fafafa 100%);
  position: absolute;
  content: "";
  z-index: 2;
  width: 100%;
  height: 100%;
  top: 0;
}

.main:after {
  content: "";
  background-image: url("/assets/images/grid.svg");
  z-index: 1;
  position: absolute;
  width: 100%;
  height: 100%;
  top: 0;
  opacity: 0.4;
  filter: invert(1);
}

.gradient {
  height: fit-content;
  z-index: 3;
  width: 100%;
  max-width: 640px;
  background-image: radial-gradient(
      at 27% 37%,
      hsla(215, 98%, 61%, 1) 0px,
      transparent 0%
    ),
    radial-gradient(at 97% 21%, hsla(125, 98%, 72%, 1) 0px, transparent 50%),
    radial-gradient(at 52% 99%, hsla(354, 98%, 61%, 1) 0px, transparent 50%),
    radial-gradient(at 10% 29%, hsla(256, 96%, 67%, 1) 0px, transparent 50%),
    radial-gradient(at 97% 96%, hsla(38, 60%, 74%, 1) 0px, transparent 50%),
    radial-gradient(at 33% 50%, hsla(222, 67%, 73%, 1) 0px, transparent 50%),
    radial-gradient(at 79% 53%, hsla(343, 68%, 79%, 1) 0px, transparent 50%);
  position: absolute;
  content: "";
  width: 100%;
  height: 100%;
  filter: blur(100px) saturate(150%);
  top: 80px;
  opacity: 0.15;
}

@media screen and (max-width: 640px) {
  .main {
    padding: 0;
  }
}

/* Tailwind Styles */

.app {
  @apply relative z-10 flex justify-center items-center flex-col max-w-7xl mx-auto sm:px-16 px-6;
}

.black_btn {
  @apply rounded-full border border-black bg-black py-1.5 px-5 text-white transition-all hover:bg-white hover:text-black text-center text-sm font-inter flex items-center justify-center;
}

.outline_btn {
  @apply rounded-full border border-black bg-transparent py-1.5 px-5 text-black transition-all hover:bg-black hover:text-white text-center text-sm font-inter flex items-center justify-center;
}

.head_text {
  @apply mt-5 text-5xl font-extrabold leading-[1.15] text-black sm:text-6xl;
}

.orange_gradient {
  @apply bg-gradient-to-r from-amber-500 via-orange-600 to-yellow-500 bg-clip-text text-transparent;
}

.green_gradient {
  @apply bg-gradient-to-r from-green-400 to-green-500 bg-clip-text text-transparent;
}

.blue_gradient {
  @apply bg-gradient-to-r from-blue-600 to-cyan-600 bg-clip-text text-transparent;
}

.desc {
  @apply mt-5 text-lg text-gray-600 sm:text-xl max-w-2xl;
}

.search_input {
  @apply block w-full rounded-md border border-gray-200 bg-white py-2.5 font-satoshi pl-5 pr-12 text-sm shadow-lg font-medium focus:border-black focus:outline-none focus:ring-0;
}

.copy_btn {
  @apply w-7 h-7 rounded-full bg-white/10 shadow-[inset_10px_-50px_94px_0_rgb(199,199,199,0.2)] backdrop-blur flex justify-center items-center cursor-pointer;
}

.glassmorphism {
  @apply rounded-xl border border-gray-200 bg-white/20 shadow-[inset_10px_-50px_94px_0_rgb(199,199,199,0.2)] backdrop-blur p-5;
}

.prompt_layout {
  @apply space-y-6 py-8 sm:columns-2 sm:gap-6 xl:columns-3;
}

/* Feed Component */
.feed {
  @apply mt-16 mx-auto w-full max-w-xl flex justify-center items-center flex-col gap-2;
}

/* Form Component */
.form_textarea {
  @apply w-full flex rounded-lg h-[200px] mt-2 p-3 text-sm text-gray-500 outline-0;
}

.form_input {
  @apply w-full flex rounded-lg mt-2 p-3 text-sm text-gray-500 outline-0;
}

/* Nav Component */
.logo_text {
  @apply max-sm:hidden font-satoshi font-semibold text-lg text-black tracking-wide;
}

.dropdown {
  @apply absolute right-0 top-full mt-3 w-full p-5 rounded-lg bg-white min-w-[210px] flex flex-col gap-2 justify-end items-end;
}

.dropdown_link {
  @apply text-sm font-inter text-gray-700 hover:text-gray-500 font-medium;
}

/* PromptCard Component */
.prompt_card {
  @apply flex-1 break-inside-avoid rounded-lg border border-gray-300 bg-white/20 bg-clip-padding p-6 pb-4 backdrop-blur-lg backdrop-filter md:w-[360px] w-full h-fit;
}

.flex-center {
  @apply flex justify-center items-center;
}

.flex-start {
  @apply flex justify-start items-start;
}

.flex-end {
  @apply flex justify-end items-center;
}

.flex-between {
  @apply flex justify-between items-center;
}
```

jsconfig.json

```json
{
  "compilerOptions": {
    "paths": {
      "@*": ["./*"]
    }
  }
}
```

route.js

```js
import Prompt from "@models/prompt";
import { connectToDB } from "@utils/database";

export const GET = async (request, { params }) => {
    try {
        await connectToDB()

        const prompt = await Prompt.findById(params.id).populate("creator")
        if (!prompt) return new Response("Prompt Not Found", { status: 404 });

        return new Response(JSON.stringify(prompt), { status: 200 })

    } catch (error) {
        return new Response("Internal Server Error", { status: 500 });
    }
}

export const PATCH = async (request, { params }) => {
    const { prompt, tag } = await request.json();

    try {
        await connectToDB();

        // Find the existing prompt by ID
        const existingPrompt = await Prompt.findById(params.id);

        if (!existingPrompt) {
            return new Response("Prompt not found", { status: 404 });
        }

        // Update the prompt with new data
        existingPrompt.prompt = prompt;
        existingPrompt.tag = tag;

        await existingPrompt.save();

        return new Response("Successfully updated the Prompts", { status: 200 });
    } catch (error) {
        return new Response("Error Updating Prompt", { status: 500 });
    }
};

export const DELETE = async (request, { params }) => {
    try {
        await connectToDB();

        // Find the prompt by ID and remove it
        await Prompt.findByIdAndRemove(params.id);

        return new Response("Prompt deleted successfully", { status: 200 });
    } catch (error) {
        return new Response("Error deleting prompt", { status: 500 });
    }
};
```

tailwind.config.js

```js
/** @type {import('tailwindcss').Config} */
module.exports = {
  content: [
    './pages/**/*.{js,ts,jsx,tsx,mdx}',
    './components/**/*.{js,ts,jsx,tsx,mdx}',
    './app/**/*.{js,ts,jsx,tsx,mdx}',
  ],
  theme: {
    extend: {
      fontFamily: {
        satoshi: ['Satoshi', 'sans-serif'],
        inter: ['Inter', 'sans-serif'],
      },
      colors: {
        'primary-orange': '#FF5722',
      }
    },
  },
  plugins: [],
}
```

user.js

```js
username: {
    type: String,
    required: [true, 'Username is required!'],
    match: [/^(?=.{8,20}$)(?![_.])(?!.*[_.]{2})[a-zA-Z0-9._]+(?<![_.])$/, "Username invalid, it should contain 8-20 alphanumeric letters and be unique!"]
  },
```