explodinggradients/ragas
Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
Python · Apache-2.0
Issues
Adapted output keys set(output.keys())={'深度', '相关性', '清晰度', '结构'} do not match with the original output keys: output_keys[i]={'structure', 'clarity', 'depth', 'relevance'}
#964 opened by qism - 2
TestsetGenerator -> RuntimeError: ... got Future <..> attached to a different loop
#963 opened by abetatos - 0
[R-223] Ragas Langfuse Integration is not working with latest version of Ragas
#893 opened by ayanray089 - 0
Random RuntimeError: Tool context error detected. This can occur due to parallelization
#957 opened by franck-cussac - 5
How does the TestsetGenerator create a dataset? A flowchart or other high-level overview would help
#920 opened by sfc-gh-akashyap - 1
documentation on cosine similarity range is wrong
#923 opened by JunhaoWang - 2
Issue in Evaluation using local LLM
#955 opened by sheetalkamthe55 - 0
[R-248] setup Devin in Ragas
#918 opened by jjmachan - 1
[R-224] Ragas integration with Langfuse to trace both llm outputs and scores in the same place
#898 opened by databill86 - 0
Can't run Quickstart
#943 opened by salvatoresaporito - 2
ExceptionInRunner: The runner thread which was running the jobs raised an exception. Read the traceback above to debug it. You can also pass `raise_exceptions=False` in case you want to show only a warning message instead
#934 opened by sadaf0714 - 0
Evaluate() function gets unexpected arguments
#944 opened by nelagamy - 0
[R-247] Integrations: wandb and wandb tracer
#916 opened by jjmachan - 0
RAGAS compatibility with mistral models
#938 opened by 0Falli0 - 2
Duplicate reference
#903 opened by CvH2020 - 6
[R-228] Testset generation. TypeError: unsupported operand type(s) for -: 'str' and 'int'
#900 opened by GaalDorn1k - 0
Question of computing Context Relevancy
#928 opened by ShuangLI59 - 0
You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset
#873 opened by JPonsa - 2
ValueError: probabilities are less than 0
#922 opened by theoden8 - 2
Faithfulness vs. context recall
#913 opened by ishaan-mehta - 3
[R-233] How to generate a testset with VertexAI
#899 opened by jaymon0703 - 0
[R-232] Faithfulness metric stability issue: non-zero output for a response with no statement
#878 opened by mukuls-zeta - 0
[R-231] `generate_text` in `LangchainLLMWrapper` ignores the value of the temperature parameter
#886 opened by HerrIvan - 1
[R-229] Automatic language adaptation bug
#890 opened by rere950303 - 0
[R-227] Add an MLFlow Integration
#910 opened by jjmachan - 1
[R-225] test issue
#908 opened by jjmachan - 2
[R-221] test issue
#905 opened by jjmachan - 1
[R-222] linear bug report
#906 opened by jjmachan - 0
AzureOpenAIEmbeddings with custom endpoint fails
#897 opened by a-romero - 0
Keep track of LLM output
#896 opened by Yen444 - 0
Metrics answer_similarity and answer_correctness not working with VertexAIEmbeddings
#876 opened by deveshch