rag-evaluation
There are 14 public repositories under the rag-evaluation topic.
Giskard-AI/giskard
🐢 Open-Source Evaluation & Testing for AI & LLM systems
Marker-Inc-Korea/AutoRAG
AutoRAG: An Open-Source Framework for Retrieval-Augmented Generation (RAG) Evaluation & Optimization with AutoML-Style Automation
Agenta-AI/agenta
The open-source LLMOps platform: prompt playground, prompt management, LLM evaluation, and LLM Observability all in one place.
RomiconEZ/llamator
Framework for testing the vulnerabilities of large language models (LLMs).
oztrkoguz/RAG-Framework-Evaluation
This project aims to compare different Retrieval-Augmented Generation (RAG) frameworks in terms of speed and performance.
ioannis-papadimitriou/rag-playground
A framework for systematic evaluation of retrieval strategies and prompt engineering in RAG systems, featuring an interactive chat interface for document analysis.
AnasAber/MLflow_with_RAG
Using MLflow to deploy a RAG pipeline built with LlamaIndex and LangChain, backed by Ollama, Hugging Face LLMs, or Groq.
TajaKuzman/pandachat-rag-benchmark
PandaChat-RAG: a benchmark for evaluating RAG systems on a non-synthetic Slovenian test dataset.
Gian207/RAG-lego-like-component
Proposal for industry RAG evaluation: Generative Universal Evaluation of LLMs and Information Retrieval.
jhaayush2004/RAG-Evaluation
Different approaches to evaluating RAG.
simranjeet97/Learn_RAG_from_Scratch_LLM
Learn Retrieval-Augmented Generation (RAG) from scratch using Hugging Face LLMs with LangChain or plain Python.
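Most of the evaluation frameworks listed above report a shared core of retrieval metrics. As a frame of reference only (none of these repositories' APIs is shown here), below is a minimal, dependency-free sketch of two common ones, recall@k and mean reciprocal rank (MRR); all document IDs in the example are hypothetical:

```python
def recall_at_k(retrieved, relevant, k):
    """Fraction of the relevant document IDs found in the top-k retrieved IDs."""
    if not relevant:
        return 0.0
    top_k = set(retrieved[:k])
    return len(top_k & set(relevant)) / len(relevant)

def mrr(retrieved, relevant):
    """Reciprocal rank of the first relevant document (0.0 if none is retrieved)."""
    for rank, doc_id in enumerate(retrieved, start=1):
        if doc_id in relevant:
            return 1.0 / rank
    return 0.0

# Hypothetical example: ranked IDs from a retriever vs. gold-labelled relevant IDs.
retrieved = ["d3", "d1", "d7", "d2"]
relevant = {"d1", "d2"}

print(recall_at_k(retrieved, relevant, k=3))  # 0.5 -> only d1 appears in the top 3
print(mrr(retrieved, relevant))               # 0.5 -> first relevant hit at rank 2
```

Per-query scores like these are usually averaged over a labelled query set; generation quality (faithfulness, answer relevance) is then judged separately, often with an LLM-as-judge, which is where frameworks such as Giskard, AutoRAG, and Agenta differ most.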