Tutorial reference: https://github.com/alexeygrigorev/llm-rag-workshop
- Input: the user provides a question.
- Look-up Question: the question is used to search the document database with Elasticsearch.
- Provide Results: the search returns the most relevant documents for the query.
- Prompt LLM: the large language model (LLM) is prompted to answer the question using the retrieved documents as context.
- Output: the LLM returns an answer grounded in those results (see the retrieval sketch after this list).
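A minimal sketch of the look-up step, assuming the Elasticsearch 8.x Python client and an index named "course-questions" with "question" and "text" fields (the index and field names are placeholders, not taken from the workshop repo):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def search(query: str, size: int = 5) -> list[dict]:
    """Return the top matching documents for a user question."""
    response = es.search(
        index="course-questions",  # assumed index name
        query={
            "multi_match": {
                "query": query,
                "fields": ["question", "text"],  # assumed field names
            }
        },
        size=size,
    )
    # Each hit carries the stored document under "_source".
    return [hit["_source"] for hit in response["hits"]["hits"]]
```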
So we need to build:
- Database
- Prompt/LLM
- Orchestrator (a sketch tying these together follows this list)
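A rough sketch of the orchestrator, reusing the `search()` helper above and the OpenAI chat API as the LLM; the model name and prompt wording are illustrative assumptions:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def build_prompt(question: str, documents: list[dict]) -> str:
    # Concatenate retrieved documents into a single context block.
    context = "\n\n".join(doc["text"] for doc in documents)
    return (
        "Answer the QUESTION using only the CONTEXT below.\n\n"
        f"CONTEXT:\n{context}\n\nQUESTION: {question}"
    )

def rag(question: str) -> str:
    documents = search(question)                # look-up step
    prompt = build_prompt(question, documents)  # prompt step
    response = client.chat.completions.create(  # LLM step
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(rag("How do I run Elasticsearch in Docker?"))
```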