- [August 3, 2023] ranx 0.3.16 is out! This release adds support for importing Qrels and Runs from `parquet` files, exporting them as `pandas.DataFrame`, and saving them as `parquet` files. Any dependency on `trec_eval` has been removed to make ranx truly MIT-compliant.
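For instance, the new parquet round-trip might look like this (a minimal sketch: the helpers `from_parquet`, `to_dataframe`, and `save` are assumed to follow ranx's existing `from_*`/`save` conventions, and the file paths are placeholders):

from ranx import Qrels, Run

# Load Qrels and Runs from parquet files (new in 0.3.16)
qrels = Qrels.from_parquet("qrels.parquet")
run = Run.from_parquet("run.parquet")

# Export a Run as a pandas.DataFrame
run_df = run.to_dataframe()

# Save it back to a parquet file
run.save("run.parquet")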
ranx ([raŋks]) is a library of fast ranking evaluation metrics implemented in Python, leveraging Numba for high-speed vector operations and automatic parallelization. It offers a user-friendly interface to evaluate and compare Information Retrieval and Recommender Systems. ranx allows you to perform statistical tests and export LaTeX tables for your scientific publications. Moreover, ranx provides several fusion algorithms and normalization strategies, as well as automatic fusion optimization. ranx also has a companion repository of pre-computed runs, called ranxhub, to facilitate model comparison. On ranxhub, you can download and share pre-computed runs for Information Retrieval datasets, such as MSMARCO Passage Ranking. ranx was featured in ECIR 2022, CIKM 2022, and SIGIR 2023.
If you use ranx to evaluate results or to conduct experiments involving fusion for your scientific publication, please consider citing it: evaluation bibtex, fusion bibtex, ranxhub bibtex.
NB: ranx is not suited for evaluating classifiers. Please, refer to the FAQ for further details.
For a quick overview, follow the Usage section.
For an in-depth overview, follow the Examples section.
- Hits
- Hit Rate
- Precision
- Recall
- F1
- r-Precision
- Bpref
- Rank-biased Precision (RBP)
- Mean Reciprocal Rank (MRR)
- Mean Average Precision (MAP)
- Discounted Cumulative Gain (DCG)
- Normalized Discounted Cumulative Gain (NDCG)
The metrics have been tested against TREC Eval for correctness.
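Each metric is addressed by a string identifier, optionally with an "@k" cutoff. A quick sketch, reusing the qrels and run objects constructed in the Usage section below:

from ranx import evaluate

# Metric identifiers, with and without "@k" cutoffs
evaluate(qrels, run, ["precision@10", "recall@100", "mrr", "ndcg@10"])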
Please, refer to Smucker et al., Carterette, and Fuhr for additional information on statistical tests for Information Retrieval.
You can load qrels from ir-datasets as simply as:
from ranx import Qrels

qrels = Qrels.from_ir_datasets("msmarco-document/dev")
A full list of the available qrels is provided here.
You can load runs from ranxhub as simply as:
from ranx import Run

run = Run.from_ranxhub("run-id")
A full list of the available runs is provided here.
Fusion Algorithms | | | | |
---|---|---|---|---|
CombMIN | CombMNZ | RRF | MAPFuse | BordaFuse |
CombMED | CombGMNZ | RBC | PosFuse | Weighted BordaFuse |
CombANZ | ISR | WMNZ | ProbFuse | Condorcet |
CombMAX | Log_ISR | Mixed | SegFuse | Weighted Condorcet |
CombSUM | LogN_ISR | BayesFuse | SlideFuse | Weighted Sum |
Please, refer to the documentation for further details.
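As a quick illustration, here is a sketch of invoking one of these algorithms directly (run_1 and run_2 are assumed to be already loaded; Reciprocal Rank Fusion needs no prior normalization):

from ranx import fuse

# Fuse two runs with Reciprocal Rank Fusion (RRF)
combined_run = fuse(runs=[run_1, run_2], method="rrf")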
As of v0.3.5, ranx requires `python>=3.8`.
pip install ranx
from ranx import Qrels, Run
qrels_dict = { "q_1": { "d_12": 5, "d_25": 3 },
"q_2": { "d_11": 6, "d_22": 1 } }
run_dict = { "q_1": { "d_12": 0.9, "d_23": 0.8, "d_25": 0.7,
"d_36": 0.6, "d_32": 0.5, "d_35": 0.4 },
"q_2": { "d_12": 0.9, "d_11": 0.8, "d_25": 0.7,
"d_36": 0.6, "d_22": 0.5, "d_35": 0.4 } }
qrels = Qrels(qrels_dict)
run = Run(run_dict)
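Qrels and Runs can also be built in other ways, e.g. from a pandas DataFrame via Qrels.from_df; a sketch, with illustrative column names:

import pandas as pd
from ranx import Qrels

# Build Qrels from a DataFrame with query id, document id,
# and (integer) relevance judgment columns
qrels_df = pd.DataFrame({
    "q_id": ["q_1", "q_1", "q_2", "q_2"],
    "doc_id": ["d_12", "d_25", "d_11", "d_22"],
    "score": [5, 3, 6, 1],
})
qrels = Qrels.from_df(qrels_df, q_id_col="q_id", doc_id_col="doc_id", score_col="score")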
from ranx import evaluate
# Compute score for a single metric
evaluate(qrels, run, "ndcg@5")
>>> 0.7861
# Compute scores for multiple metrics at once
evaluate(qrels, run, ["map@5", "mrr"])
>>> {"map@5": 0.6416, "mrr": 0.75}
from ranx import compare
# Compare different runs and perform Two-sided Paired Student's t-Test
report = compare(
qrels=qrels,
runs=[run_1, run_2, run_3, run_4, run_5],
metrics=["map@100", "mrr@100", "ndcg@10"],
max_p=0.01 # P-value threshold
)
print(report)

Output:

#    Model    MAP@100    MRR@100    NDCG@10
---  -------  ---------  ---------  ---------
a    model_1  0.320ᵇ     0.320ᵇ     0.368ᵇᶜ
b    model_2  0.233      0.234      0.239
c    model_3  0.308ᵇ     0.309ᵇ     0.330ᵇ
d    model_4  0.366ᵃᵇᶜ   0.367ᵃᵇᶜ   0.408ᵃᵇᶜ
e    model_5  0.405ᵃᵇᶜᵈ  0.406ᵃᵇᶜᵈ  0.451ᵃᵇᶜᵈ
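Reports can also be exported for publications; for instance, to_latex renders the comparison as a LaTeX table:

# Export the comparison as a ready-to-paste LaTeX table
print(report.to_latex())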
from ranx import fuse, optimize_fusion
best_params = optimize_fusion(
qrels=train_qrels,
runs=[train_run_1, train_run_2, train_run_3],
norm="min-max", # The norm. to apply before fusion
method="wsum", # The fusion algorithm to use (Weighted Sum)
metric="ndcg@100", # The metric to maximize
)
combined_test_run = fuse(
runs=[test_run_1, test_run_2, test_run_3],
norm="min-max",
method="wsum",
params=best_params,
)
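The fused run can then be evaluated like any other run (a sketch, assuming test_qrels holds the held-out judgments):

# Evaluate the fused test run against the test judgments
evaluate(test_qrels, combined_test_run, "ndcg@100")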
- Overview
- Qrels and Run
- Evaluation
- Comparison and Report
- Fusion
- Plot
- Share your runs with ranxhub
Browse the documentation for more details and examples.
If you use ranx to evaluate results for your scientific publication, please consider citing our ECIR 2022 paper:
BibTeX
@inproceedings{ranx,
author = {Elias Bassani},
title = {ranx: {A} Blazing-Fast Python Library for Ranking Evaluation and Comparison},
booktitle = {{ECIR} {(2)}},
series = {Lecture Notes in Computer Science},
volume = {13186},
pages = {259--264},
publisher = {Springer},
year = {2022},
doi = {10.1007/978-3-030-99739-7\_30}
}
If you use the fusion functionalities provided by ranx for conducting the experiments of your scientific publication, please consider citing our CIKM 2022 paper:
BibTeX
@inproceedings{ranx.fuse,
author = {Elias Bassani and
Luca Romelli},
title = {ranx.fuse: {A} Python Library for Metasearch},
booktitle = {{CIKM}},
pages = {4808--4812},
publisher = {{ACM}},
year = {2022},
doi = {10.1145/3511808.3557207}
}
If you use pre-computed runs from ranxhub to make comparisons for your scientific publication, please consider citing our SIGIR 2023 paper:
BibTeX
@inproceedings{ranxhub,
author = {Elias Bassani},
title = {ranxhub: An Online Repository for Information Retrieval Runs},
booktitle = {{SIGIR}},
pages = {3210--3214},
publisher = {{ACM}},
year = {2023},
doi = {10.1145/3539618.3591823}
}
Would you like to see other features implemented? Please, open a feature request.
Would you like to contribute? Please, drop me an e-mail.
ranx is open-source software licensed under the MIT license.