*Demo video: Unify_demos_RAG_Playground.mp4*
RAG Playground is a Streamlit application that lets you interact with your PDF files using the Language Model of your choice.
Upload a PDF and chat with an LLM to perform document analysis in a playground environment. Compare the performance of LLMs across endpoint providers to find the best configuration for your speed, latency, and cost requirements using the dynamic routing feature. Experiment by tuning model hyperparameters such as temperature, chunk size, and chunk overlap, or try the model with and without conversational capabilities. You can find more model and provider information in the Unify benchmark interface.
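Under the hood, this is the standard LangChain retrieval-augmented generation pattern: load and chunk the PDF, embed and index the chunks, then answer questions with the selected endpoint. The sketch below is illustrative rather than the repository's actual `rag_script.py`; the loader, embedding model, FAISS vector store, Unify base URL, and the example `model@provider` string are all assumptions.

```python
# Minimal RAG sketch (not the actual rag_script.py): load a PDF, chunk it,
# index the chunks, and answer a question with a Unify-routed model.
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI
from langchain_text_splitters import RecursiveCharacterTextSplitter

docs = PyPDFLoader("my_document.pdf").load()

# chunk_size / chunk_overlap are the same hyperparameters exposed in the playground
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

vectorstore = FAISS.from_documents(chunks, HuggingFaceEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

# Assumption: Unify's OpenAI-compatible endpoint, with a "<model>@<provider>" string
llm = ChatOpenAI(
    base_url="https://api.unify.ai/v0",
    api_key="<UNIFY_API_KEY>",
    model="llama-3-8b-chat@fireworks-ai",
    temperature=0.7,  # another tunable hyperparameter
)

question = "What is this document about?"
context = "\n\n".join(doc.page_content for doc in retriever.invoke(question))
answer = llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
print(answer.content)
```

Raising `chunk_size` feeds the model larger but fewer context passages, while `chunk_overlap` controls how much adjacent chunks share; these are the knobs you can tune interactively in the playground.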
- Visit the application: LangChain RAG Playground
- Input your Unify API Key. If you don’t have one yet, log in to the Unify Console to get yours.
- Select the model and endpoint provider of your choice from the drop-down. You can find both model and provider information in the benchmark interface.
- Upload your document(s) and click the Submit button.
- Play! (A sketch of how these steps fit together in a Streamlit UI follows below.)
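For context, here is a rough sketch of how the steps above can be wired together in Streamlit. It is not the repository's actual code; the widget labels, example endpoint strings, and session-state handling are assumptions for illustration.

```python
# Illustrative Streamlit skeleton for the steps above (not the repository's rag_script.py).
import streamlit as st

st.title("LangChain RAG Playground")

# Step 2: Unify API key
api_key = st.sidebar.text_input("Unify API Key", type="password")

# Step 3: model and endpoint provider (example "model@provider" strings, assumed)
endpoint = st.sidebar.selectbox(
    "Endpoint (model@provider)",
    ["llama-3-8b-chat@fireworks-ai", "gpt-4o@openai", "mixtral-8x7b-instruct-v0.1@together-ai"],
)
temperature = st.sidebar.slider("Temperature", 0.0, 1.0, 0.7)

# Step 4: upload document(s) and submit
files = st.sidebar.file_uploader("Upload PDF(s)", type="pdf", accept_multiple_files=True)
if st.sidebar.button("Submit") and api_key and files:
    st.session_state["ready"] = True  # build the vector store here (see the sketch above)

# Step 5: chat with the indexed documents
if st.session_state.get("ready"):
    if question := st.chat_input("Ask something about your documents"):
        st.chat_message("user").write(question)
        st.chat_message("assistant").write("…answer produced by the RAG chain…")
```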
The repository is located at RAG Playground Repository. To run the application locally, follow these steps:
- Clone the repository to your local machine.
- Set up your virtual environment and install the dependencies from `requirements.txt`:
python -m venv .venv # create virtual environment
source .venv/bin/activate # on Windows use .venv\Scripts\activate.bat
pip install -r requirements.txt
- Run `rag_script.py` with the Streamlit module:
python -m streamlit run rag_script.py
| Name | GitHub Profile |
| --- | --- |
| Anthony Okonneh | AO |
| Oscar Arroyo Vega | OscarAV |
| Martin Oywa | Martin Oywa |