Use the new GPT-3.5 API to build a ChatGPT chatbot for The Algorithm ML repo.
The tech stack includes LangChain, Pinecone, TypeScript, OpenAI, and Next.js. LangChain is a framework that makes it easier to build scalable AI/LLM apps and chatbots. Pinecone is a vector store that holds embeddings of your repo's text so that similar docs can be retrieved later. Autodoc is used to create markdown files from the repo, which are then embedded using this repo. The original Autodoc conversion was done by Sam Hogan at this link. This repo is a fork of gpt4-pdf-chatbot-langchain by Mayo.
Get in touch via Twitter if you have questions.
The visual guide for this repo and tutorial is in the visual guide folder, also originally created by Mayo.
If you run into errors, please review the troubleshooting section further down this page.
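The retrieval idea above (store embeddings, then fetch the most similar docs for a query) can be sketched in plain TypeScript. This is illustrative only and not Pinecone's actual API; the in-memory "index" stands in for the vector store:

```typescript
// Illustrative only: how a vector store like Pinecone finds "similar docs".
// Each doc and the query are embedded as number arrays; retrieval ranks
// docs by cosine similarity to the query vector.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank a tiny in-memory "index" of embeddings against a query embedding
// and return the id of the best match.
function topMatch(
  query: number[],
  index: { id: string; vector: number[] }[]
): string {
  return index
    .map((d) => ({ id: d.id, score: cosineSimilarity(query, d.vector) }))
    .sort((x, y) => y.score - x.score)[0].id;
}
```

In the real app, the embedding vectors come from OpenAI and the similarity search happens inside Pinecone; the ranking principle is the same.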
- Clone the repo: `git clone [github https url]`
- Install packages: `pnpm install`
- Set up your `.env` file: copy `.env.example` into `.env`. Your `.env` file should look like this:
OPENAI_API_KEY=
PINECONE_API_KEY=
PINECONE_ENVIRONMENT=
PINECONE_INDEX_NAME=
- Visit OpenAI to retrieve your API key and insert it into your `.env` file.
- Visit Pinecone to create and retrieve your API keys, and also retrieve your environment and index name from the dashboard.
- In the `config` folder, replace the `PINECONE_NAME_SPACE` with a namespace where you'd like to store your embeddings on Pinecone when you run `pnpm run ingest`. This namespace will later be used for queries and retrieval.
- In the `utils/makechain.ts` chain, change the `QA_PROMPT` for your own use case. Change `modelName` in `new OpenAIChat` to `gpt-3.5-turbo` if you don't have access to `gpt-4`. Please verify outside this repo that you have access to `gpt-4`; otherwise the application will not work with it.
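The model swap above comes down to changing one option. A hypothetical fragment mirroring the shape of the options passed to `OpenAIChat` in `utils/makechain.ts` (the names and prompt text here are illustrative assumptions, not the repo's actual code; check the real file):

```typescript
// Hypothetical sketch of the chat-model options used in utils/makechain.ts.
// Swap modelName to 'gpt-3.5-turbo' if your OpenAI account lacks gpt-4 access.
const modelOptions = {
  temperature: 0, // deterministic answers for QA over docs
  modelName: "gpt-3.5-turbo", // or 'gpt-4' if you have verified access
};

// A custom QA prompt tailored to your use case might look like:
const QA_PROMPT = `You are an assistant answering questions about a code repo.
Use only the provided context. If unsure, say you don't know.`;
```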
This repo can load a repo with multiple folders.
- Run Autodoc on your original repo to create markdown files for the entire repo. Save the Autodoc results as a separate repo, and provide the link in the `script/ingest-data.ts` file.
- Run the script `npm run ingest` to 'ingest' and embed your docs. If you run into errors, see the troubleshooting section below.
- Check the Pinecone dashboard to verify that your namespace and vectors have been added.
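Ingestion pipelines like the one above typically split each markdown file into overlapping chunks before embedding, so each chunk fits the embedding model's context size. A minimal sketch of that idea (an assumption about the pipeline, not the repo's actual `ingest-data` code; sizes are illustrative):

```typescript
// Split a document into overlapping chunks so each stays within the
// embedding model's context size. chunkSize/overlap values are illustrative.
function splitIntoChunks(text: string, chunkSize = 1000, overlap = 200): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    start += chunkSize - overlap; // step forward, keeping some overlap
  }
  return chunks;
}
```

The overlap helps a retrieved chunk carry enough surrounding context to answer questions that span a chunk boundary.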
Once you've verified that the embeddings and content have been successfully added to Pinecone, run the app with `pnpm run dev` to launch the local dev environment, then type a question in the chat interface.
In general, keep an eye on the issues and discussions sections of this repo for solutions.
General errors
- Make sure you're running the latest Node version. Run `node -v`.
- Ensure you have markdown files for each code file, generated using Autodoc or similar tools.
- `console.log` the `env` variables and make sure they are exposed.
- Make sure you're using the same versions of LangChain and Pinecone as this repo.
- Check that you've created a `.env` file that contains your valid (and working) API keys, environment, and index name.
- If you change `modelName` in `OpenAIChat`, note that the correct name of the alternative model is `gpt-3.5-turbo`.
- Make sure you have access to `gpt-4` if you decide to use it. Test your OpenAI keys outside the repo and make sure they work and that you have enough API credits.
- Check that you don't have multiple OpenAI keys in your global environment. If you do, the local `env` file from the project will be overwritten by the system's `env` variable.
- Try hard-coding your API keys into the `process.env` variables.
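To check that the variables from `.env` are actually exposed, a quick hypothetical helper (the variable names come from `.env.example` above):

```typescript
// Report any required env vars (names from .env.example) that are missing.
const REQUIRED_ENV = [
  "OPENAI_API_KEY",
  "PINECONE_API_KEY",
  "PINECONE_ENVIRONMENT",
  "PINECONE_INDEX_NAME",
];

function missingEnvVars(env: Record<string, string | undefined>): string[] {
  return REQUIRED_ENV.filter((name) => !env[name]);
}

console.log(missingEnvVars(process.env));
```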
Pinecone errors
- Make sure your Pinecone dashboard `environment` and `index` match the ones in the `pinecone.ts` and `.env` files.
- Check that you've set the vector dimensions to `1536`.
- Make sure your Pinecone namespace is in lowercase.
- Pinecone indexes of users on the Starter (free) plan are deleted after 7 days of inactivity. To prevent this, send an API request to Pinecone to reset the counter before the 7 days elapse.
- Retry from scratch with a new Pinecone project, index, and cloned repo.
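The `1536` figure above comes from OpenAI's `text-embedding-ada-002` embedding model, which returns 1536-dimensional vectors, so the Pinecone index dimension must match. A tiny sanity check you could run on a returned embedding:

```typescript
// text-embedding-ada-002 produces 1536-dimensional embeddings, so the
// Pinecone index must be created with dimension 1536 to accept them.
const EXPECTED_DIM = 1536;

function dimensionMatches(vector: number[], indexDim: number = EXPECTED_DIM): boolean {
  return vector.length === indexDim;
}
```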
The frontend of this repo is inspired by langchain-chat-nextjs.