In this tutorial, we'll see how to use the LlamaIndex Instrumentation module to send the intermediate steps of a RAG pipeline to the frontend, for an intuitive user experience.
Full video tutorial under 3 minutes 🔥👇
We use Server-Sent Events (SSE), which are received by the Vercel AI SDK on the frontend.
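Before the setup, here is the core idea in code. The backend registers a custom event handler with LlamaIndex's instrumentation dispatcher; every event fired during a query (retrieval start/end, synthesis, LLM calls, ...) is pushed onto a queue that the API layer later streams to the client. The sketch below only illustrates that pattern; the names (`QueueingEventHandler`, `event_queue`) are assumptions, not the repo's actual code.

```python
# Minimal sketch (assumed names): capture LlamaIndex instrumentation events
# so they can later be streamed to the frontend.
import asyncio

from llama_index.core.instrumentation import get_dispatcher
from llama_index.core.instrumentation.event_handlers import BaseEventHandler
from llama_index.core.instrumentation.events import BaseEvent

# Queue that the SSE endpoint (sketched further below) drains.
event_queue: asyncio.Queue = asyncio.Queue()


class QueueingEventHandler(BaseEventHandler):
    """Pushes every instrumentation event onto the queue."""

    @classmethod
    def class_name(cls) -> str:
        return "QueueingEventHandler"

    def handle(self, event: BaseEvent, **kwargs) -> None:
        # event.class_name() identifies the pipeline step,
        # e.g. "RetrievalStartEvent" or "SynthesizeEndEvent".
        event_queue.put_nowait({"event": event.class_name()})


# Attach the handler to the root dispatcher so events from the whole
# RAG pipeline flow through it.
get_dispatcher().add_event_handler(QueueingEventHandler())
```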
First, clone the repo:

```bash
git clone https://github.com/rsrohan99/rag-stream-intermediate-events-tutorial.git
cd rag-stream-intermediate-events-tutorial
```
`cd` into the `backend` directory and create the `.env` file from the provided example:

```bash
cd backend
cp .env.example .env
```

Set your OpenAI API key in `.env`:

```
OPENAI_API_KEY=****
```
Install the dependencies, then generate the index:

```bash
poetry install
poetry run python app/engine/generate.py
```
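`generate.py` is the ingestion step: it builds the vector index that the RAG pipeline queries at runtime. Roughly, such a script looks like the sketch below; the paths and exact steps here are assumptions, see `app/engine/generate.py` for the real implementation.

```python
# Rough sketch of an index-generation script (assumed paths and steps).
from dotenv import load_dotenv
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

load_dotenv()  # picks up OPENAI_API_KEY from .env


def generate_index(data_dir: str = "data", persist_dir: str = "storage") -> None:
    # Load the source documents, embed them into a vector index, and persist
    # the index to disk so the server can load it at startup.
    documents = SimpleDirectoryReader(data_dir).load_data()
    index = VectorStoreIndex.from_documents(documents)
    index.storage_context.persist(persist_dir=persist_dir)


if __name__ == "__main__":
    generate_index()
```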
Start the backend server:

```bash
poetry run python main.py
```
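Once the server is running, the chat endpoint can drain the event queue from the handler sketched earlier and forward each item to the browser in SSE format, which the Vercel AI SDK consumes on the frontend. Again a minimal sketch, assuming a FastAPI backend; the endpoint path and module names are illustrative, not the repo's actual code.

```python
# Sketch of streaming queued events as Server-Sent Events (assumed names).
import json

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

# Hypothetical module holding the event_queue from the earlier sketch.
from app.events import event_queue

app = FastAPI()


@app.get("/api/chat/events")
async def stream_events() -> StreamingResponse:
    async def event_generator():
        while True:
            event = await event_queue.get()  # filled by the instrumentation handler
            # SSE wire format: each message is "data: <payload>\n\n".
            yield f"data: {json.dumps(event)}\n\n"

    return StreamingResponse(event_generator(), media_type="text/event-stream")
```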
Now `cd` into the `frontend` directory and create the `.env` file:

```bash
cd frontend
cp .env.example .env
```

Install the dependencies and start the dev server:

```bash
npm i
npm run dev
```