Allows artificial intelligence to extract information and insights from uploaded PDF documents in real time, based on your questions and conversation.
- TypeScript
- Next.js
- Tailwind CSS
- shadcn/ui
- PlanetScale Postgres Database
- Pinecone Vector Database
- Prisma ORM
- TanStack Query & tRPC
- LangChain - parsing and enabling the vectorization of your document for LLM context
- Kinde Authentication
- OpenAI GPT-3 LLM
- Uploadthing - AWS S3 document upload abstraction layer
| Upload PDFs | Manage, view and delete files |
| --- | --- |
| PDF reader functionality | Infinite message rendering |
Experimenting with Next.js server components and API routes.
State updates immediately when a message is sent, for maximum responsiveness and user experience. If the message endpoint errors or the database write fails, the state is rolled back to the saved pre-send condition so the user can immediately try again.
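With TanStack Query this pattern usually lives in a mutation's `onMutate`/`onError` callbacks. Below is a minimal, library-free sketch of the same optimistic-update-with-rollback idea; the `Message` shape and the `post` callback are illustrative, not the app's real API.

```typescript
// Sketch: optimistically append a message, roll back if the send fails.
type Message = { id: string; text: string; pending?: boolean };

class ChatState {
  messages: Message[] = [];

  async send(text: string, post: (text: string) => Promise<void>): Promise<boolean> {
    const snapshot = [...this.messages];                       // save state for rollback
    this.messages.push({ id: `tmp-${Date.now()}`, text, pending: true });
    try {
      await post(text);                                        // e.g. a tRPC mutation or fetch
      this.messages[this.messages.length - 1].pending = false; // confirm on success
      return true;
    } catch {
      this.messages = snapshot;                                // restore the pre-send state
      return false;
    }
  }
}
```

The snapshot is taken before mutating, so a failed request leaves the UI exactly as it was, and the user's input can be retried immediately.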
AI responses can be slow to complete. Instead of waiting for the full response to finish, it is streamed into the application in real time as it is being generated.
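Client-side, this amounts to reading the response body chunk by chunk and re-rendering after each chunk. A minimal sketch, assuming the endpoint returns a chunked text body (the function name and callback are illustrative):

```typescript
// Sketch: consume a streamed AI response and surface partial text as it arrives.
async function readStream(
  stream: ReadableStream<Uint8Array>,
  onChunk: (partial: string) => void,
): Promise<string> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let text = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true }); // append this chunk
    onChunk(text);                                   // update the UI with the partial response
  }
  return text;
}
```

In the app this would be called with `response.body` from a `fetch` to the message endpoint, with `onChunk` writing the partial text into the chat state.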
Your document conversations can be lengthy. Instead of rendering every single message when a document page is opened, only the most recent 10 messages are queried. Using a rolling limit, previous messages are automatically rendered in as you scroll up the chat window, seamlessly improving performance without interrupting the user.
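The rolling limit is essentially cursor-based pagination over the message history, newest page first. A minimal sketch, with an illustrative `Msg` shape and an in-memory array standing in for the database query:

```typescript
// Sketch: fetch the newest `limit` messages before an optional cursor,
// returning the next cursor for loading older pages as the user scrolls up.
type Msg = { id: number; text: string };

function getMessagePage(
  all: Msg[],      // ordered oldest -> newest, as stored
  limit: number,
  cursor?: number, // id of the oldest message already loaded
): { items: Msg[]; nextCursor?: number } {
  const upper = cursor !== undefined ? all.findIndex(m => m.id === cursor) : all.length;
  const start = Math.max(0, upper - limit);
  const items = all.slice(start, upper);
  const nextCursor = start > 0 ? items[0].id : undefined; // undefined: no older pages
  return { items, nextCursor };
}
```

With TanStack Query's `useInfiniteQuery`, `nextCursor` would be returned as the next page parameter, and a scroll handler at the top of the chat window would trigger `fetchNextPage`.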
Utilizes Kinde Auth for full sign-in/sign-up account creation, keeping your documents private and secure.
First, clone the repo locally:

```bash
git clone git@github.com:devhmac/sift-ai.git
```
Install dependencies:

```bash
npm install
```
Environment Variables
- Refer to the example env file for the required API and library keys
Run the development server:

```bash
npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev
```
Open [http://localhost:3000](http://localhost:3000) in your browser.