GPT-Chatbot

An open-source AI chatbot web-app built with React 18, Next.js, OpenAI, and Supabase.

Description · Features · Model Provider · Running locally


Description

This project develops an AI chatbot assistant designed to help users with a wide range of tasks and answer their questions. The chatbot leverages the OpenAI API to provide users with useful information and execute tasks on their behalf. The project is written in TypeScript and hosted on Vercel, and to ensure reliability and performance, the chatbot is deployed on an edge runtime.
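
For context, this is roughly how a Next.js App Router route handler opts into the edge runtime. The file path app/api/chat/route.ts is an assumption for illustration, not necessarily this repo's actual layout.

// app/api/chat/route.ts (assumed path, for illustration)
// A Next.js App Router handler opts into the edge runtime with this export.
export const runtime = "edge";

export async function POST(req: Request) {
  // The chat completion logic lives here (see "Model Provider" below).
  return new Response("ok");
}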

Link: https://chatbot-gpt4-lite.vercel.app/

chatbot-base.mp4

Features

Model Provider

This project comes with OpenAI's gpt-4-0134 model, with gpt-3.5-turbo as the fallback option. With the help of the OpenAI SDK, we can initialize a response stream between the client and the API.

// Imports assume the openai-edge client and the Vercel AI SDK ('ai') package.
import { Configuration, OpenAIApi } from "openai-edge";
import { OpenAIStream, StreamingTextResponse } from "ai";

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

// Chat Completions expects each message to have a role and content.
const messages = [{ role: "user", content: "Who is the president of America?" }];

const response = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages,
    temperature: 0.5,
    stream: true,
});

// Pipe the completion back to the client as a text stream.
const stream = OpenAIStream(response);
return new StreamingTextResponse(stream);
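
On the client, the stream returned above can be consumed with the AI SDK's useChat hook. This is a minimal sketch; the /api/chat endpoint path and the component shape are assumptions, not necessarily how this repo wires it up.

"use client";
import { useChat } from "ai/react";

export default function Chat() {
  // useChat manages the message list, input state, and streaming updates.
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: "/api/chat", // assumed route path
  });

  return (
    <form onSubmit={handleSubmit}>
      {messages.map((m) => (
        <p key={m.id}>{m.role}: {m.content}</p>
      ))}
      <input value={input} onChange={handleInputChange} />
    </form>
  );
}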

Running locally

You will need the necessary environment variables set up in your .env file to run the Next.js AI Chatbot. These include keys for your OpenAI account, Supabase account, and Stripe account, plus a GitHub OAuth client ID and secret.

OPENAI_API_KEY =
SUPABASE_URL =
SUPABASE_ANON_KEY =
STRIPE_PUBLISHABLE_KEY =
STRIPE_WEBHOOK_KEY =
STRIPE_SECRET_KEY =
GITHUB_CLIENT_ID =
GITHUB_CLIENT_SECRET =

Note: You should not commit your .env file or it will expose secrets that will allow others to control access to your various OpenAI and authentication provider accounts.
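
As an example of how these variables are typically consumed, a Supabase client can be created from SUPABASE_URL and SUPABASE_ANON_KEY. This is a sketch only; the file location and export shape are assumptions, not lifted from the repo.

import { createClient } from "@supabase/supabase-js";

// Reads the Supabase credentials from the environment (see .env above).
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

export default supabase;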

  1. Install dependencies: pnpm i
  2. Create a new .env file.
  3. Populate the .env file with the necessary environment variables.
  4. Build and start the app:
pnpm run build
pnpm run start
  5. If you are listening to Stripe webhooks (a sketch of a webhook handler follows this list):
stripe login
stripe listen --forward-to localhost:3000/api/webhooks/stripe
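
For context, a Stripe webhook route typically verifies the request signature with STRIPE_WEBHOOK_KEY before handling the event. This is a minimal sketch under that assumption; the handler shape and route path are illustrative, not the repo's exact code.

import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

export async function POST(req: Request) {
  const body = await req.text();
  const signature = req.headers.get("stripe-signature")!;

  // Verify that the payload really came from Stripe using the webhook secret.
  const event = stripe.webhooks.constructEvent(
    body,
    signature,
    process.env.STRIPE_WEBHOOK_KEY!
  );

  // Handle the event types you care about (e.g. checkout.session.completed).
  console.log(`Received Stripe event: ${event.type}`);
  return new Response(null, { status: 200 });
}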

Your app template should now be running on localhost:3000.

Running locally with Docker

docker login
docker pull korebhaumik/gpt-4-lite:latest
docker run --env-file .env -p 3000:3000 korebhaumik/gpt-4-lite

Note: If the Docker image is not available (or the repo is private), you can build it locally by running docker build -t gpt-4-lite . in the root directory of the project.