Anserini is an open-source information retrieval toolkit built on Lucene that aims to bridge the gap between academic information retrieval research and the practice of building real-world search applications. This effort grew out of a reproducibility study of various open-source retrieval engines in 2016 (Lin et al., ECIR 2016). See Yang et al. (SIGIR 2017) and Yang et al. (JDIQ 2018) for overviews.
If you've found Anserini to be helpful, we have a simple request: contribute back. If you replicate baseline results on standard test collections, let us know whether you're successful by sending us a pull request with a simple note, like the one at the bottom of the Robust04 page. Replicability is important to us, and we'd like to know about both successes and failures. Since the regression documentation is auto-generated, pull requests should be sent against the raw templates. In turn, you'll be recognized as a contributor.
A zero-effort way to try out Anserini is our online Colab demo! Click "Open in Playground" and you can replicate our baselines from the TREC 2004 Robust Track right from the browser!
Main dependencies:
- Anserini was recently upgraded to Java 11 at commit `17b702d` (7/11/2019) from Java 8. Maven 3.3+ is also required; a quick way to verify both is shown after this list.
- Anserini was upgraded to Lucene 8.0 as of commit `75e36f9` (6/12/2019); prior to that, the toolkit used Lucene 7.6. Based on preliminary experiments, query evaluation latency is much improved in Lucene 8. As a result of this upgrade, the results of all regressions have changed slightly. To replicate the old results from Lucene 7.6, use v0.5.1.
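To confirm that your toolchain meets these requirements, standard version checks (nothing Anserini-specific) are enough:

```
java -version   # should report version 11 or later
mvn -version    # should report Maven 3.3 or later
```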
After cloning our repo, build using Maven:
```
mvn clean package appassembler:assemble
```
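A successful build generates launcher scripts under `target/appassembler/bin/`, which the commands later in this README rely on. A quick sanity check (a sketch; the exact listing depends on the version you built):

```
# The main indexing and retrieval entry points should appear among the generated scripts.
ls target/appassembler/bin/
```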
The `eval/` directory contains evaluation tools and scripts, including `trec_eval`, `gdeval.pl`, and `ndeval`. Before using `trec_eval`, unpack and compile it as follows:

```
tar xvfz trec_eval.9.0.4.tar.gz && cd trec_eval.9.0.4 && make
```
Before using `ndeval`, compile it as follows:

```
cd ndeval && make
```
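Once compiled, `trec_eval` scores a run file against relevance judgments (qrels). A typical invocation looks like the following; the qrels and run file paths are placeholders:

```
# -m map and -m P.30 request mean average precision and precision at rank 30.
eval/trec_eval.9.0.4/trec_eval -m map -m P.30 path/to/qrels.txt path/to/run.txt
```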
Anserini is designed to support experiments on various standard TREC collections out of the box. Each collection is associated with regression tests for replicability. Note that these regressions capture the "out of the box" experience, based on default parameter settings; a command-level sketch of what a regression involves follows the list below.
- Regressions for Disks 1 & 2
- Regressions for Disks 4 & 5 (Robust04) [Colab demo]
- Regressions for AQUAINT (Robust05)
- Regressions for the New York Times (Core17)
- Regressions for the Washington Post (Core18)
- Regressions for Wt10g
- Regressions for Gov2
- Regressions for ClueWeb09 (Category B)
- Regressions for ClueWeb12-B13
- Regressions for ClueWeb12
- Regressions for Tweets2011 (MB11 & MB12)
- Regressions for Tweets2013 (MB13 & MB14)
- Regressions for Complex Answer Retrieval v1.5 (CAR17)
- Regressions for Complex Answer Retrieval v2.0 (CAR17)
- Regressions for Complex Answer Retrieval v2.0 (CAR17) with Doc2query expansion
- Regressions for the MS MARCO Passage Task
- Regressions for the MS MARCO Passage Task with Doc2query expansion
- Regressions for the MS MARCO Document Task
- Regressions for NTCIR-8 ACLIA (IR4QA subtask, Chinese monolingual)
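To give a sense of what these regressions involve at the command level, the sketch below indexes Robust04 (TREC Disks 4 & 5) and runs a BM25 baseline. It follows the shape of the Robust04 regression guide, but the collection path, index name, topics file, and output file are placeholders, and exact flags may differ slightly across Anserini versions:

```
# Build a positional Lucene index (storing docvectors and raw documents) over Disks 4 & 5.
target/appassembler/bin/IndexCollection -collection TrecCollection -generator JsoupGenerator \
  -threads 16 -input /path/to/disk45 -index lucene-index.robust04 \
  -storePositions -storeDocvectors -storeRawDocs

# Retrieve the Robust04 topics with BM25, writing a TREC-format run file.
target/appassembler/bin/SearchCollection -index lucene-index.robust04 -topicreader Trec \
  -topics path/to/topics.robust04.txt -output run.robust04.bm25.txt -bm25
```

The resulting run file can then be scored with `trec_eval` against the Robust04 qrels, as described above.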
Other experiments:
- Replicating "Neural Hype" Experiments
- Guide to running BM25 baselines on the MS MARCO Passage Task
- Guide to running BM25 baselines on the MS MARCO Document Task
- Guide to replicating document expansion by query prediction (Doc2query) results
- Guide to running experiments on the AI2 Open Research Corpus
- Experiments from Yang et al. (JDIQ 2018)
- Runbooks for TREC 2018: [Anserini group] [h2oloo group]
- Runbook for ECIR 2019 paper on axiomatic semantic term matching
- Runbook for ECIR 2019 paper on cross-collection relevance feedback
See this page for additional documentation.
- Use Anserini in Python via Pyserini!
- Anserini integrates with SolrCloud via Solrini!
- Anserini integrates with Elasticsearch via Elasterini!
- v0.6.0: September 6, 2019 [Release Notes][Known Issues]
- v0.5.1: June 11, 2019 [Release Notes]
- v0.5.0: June 5, 2019 [Release Notes]
- v0.4.0: March 4, 2019 [Release Notes]
- v0.3.0: December 16, 2018 [Release Notes]
- v0.2.0: September 10, 2018 [Release Notes]
- v0.1.0: July 4, 2018 [Release Notes]
- Jimmy Lin, Matt Crane, Andrew Trotman, Jamie Callan, Ishan Chattopadhyaya, John Foley, Grant Ingersoll, Craig Macdonald, Sebastiano Vigna. Toward Reproducible Baselines: The Open-Source IR Reproducibility Challenge. ECIR 2016.
- Peilin Yang, Hui Fang, and Jimmy Lin. Anserini: Enabling the Use of Lucene for Information Retrieval Research. SIGIR 2017.
- Peilin Yang, Hui Fang, and Jimmy Lin. Anserini: Reproducible Ranking Baselines Using Lucene. Journal of Data and Information Quality, 10(4), Article 16, 2018.
- Wei Yang, Haotian Zhang, and Jimmy Lin. Simple Applications of BERT for Ad Hoc Document Retrieval. arXiv:1903.10972, March 2019.
- Rodrigo Nogueira, Wei Yang, Jimmy Lin, and Kyunghyun Cho. Document Expansion by Query Prediction. arXiv:1904.08375, April 2019.
- Peilin Yang and Jimmy Lin. Reproducing and Generalizing Semantic Term Matching in Axiomatic Information Retrieval. ECIR 2019.
- Ruifan Yu, Yuhao Xie, and Jimmy Lin. Simple Techniques for Cross-Collection Relevance Transfer. ECIR 2019.
- Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. End-to-End Open-Domain Question Answering with BERTserini. NAACL-HLT 2019 Demos.
- Ryan Clancy, Toke Eskildsen, Nick Ruest, and Jimmy Lin. Solr Integration in the Anserini Information Retrieval Toolkit. SIGIR 2019.
- Ryan Clancy, Jaejun Lee, Zeynep Akkalyoncu Yilmaz, and Jimmy Lin. Information Retrieval Meets Scalable Text Analytics: Solr Integration with Spark. SIGIR 2019.
- Jimmy Lin and Peilin Yang. The Impact of Score Ties on Repeatability in Document Ranking. SIGIR 2019.
- Wei Yang, Kuang Lu, Peilin Yang, and Jimmy Lin. Critically Examining the "Neural Hype": Weak Baselines and the Additivity of Effectiveness Gains from Neural Ranking Models. SIGIR 2019.
This research is supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada. Previous support came from the U.S. National Science Foundation under IIS-1423002 and CNS-1405688. Any opinions, findings, and conclusions or recommendations expressed do not necessarily reflect the views of the sponsors.