`elasticsearch-ir-evaluator` is a Python package for easily calculating a range of information retrieval (IR) accuracy metrics using Elasticsearch and your own datasets. It is ideal for users who need to assess the effectiveness of search queries in Elasticsearch. It supports the following key IR metrics:
- Precision
- Recall
- Mean Reciprocal Rank (MRR)
- Mean Average Precision (MAP)
- Cumulative Gain (CG)
- Normalized Discounted Cumulative Gain (nDCG)
- False Positive Rate (FPR)
- Binary Preference (BPref)
Together, these metrics provide a comprehensive view of search performance, covering the main aspects of IR system evaluation; you can select the specific metrics that match your evaluation needs.
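To make the definitions concrete, here is a small, package-independent sketch of how precision, recall, and reciprocal rank are computed for a single toy query (the document IDs and values are purely illustrative):

```python
# Toy example (independent of this package) for a single query
retrieved = ["doc1", "doc7", "doc3", "doc9"]  # ranked search results
relevant = {"doc1", "doc3", "doc5"}           # ground-truth relevant IDs

hits = [doc_id for doc_id in retrieved if doc_id in relevant]

precision = len(hits) / len(retrieved)  # 2 / 4 = 0.5
recall = len(hits) / len(relevant)      # 2 / 3 ≈ 0.67

# Reciprocal rank: 1 / rank of the first relevant result (doc1 at rank 1);
# MRR is this value averaged over all queries.
reciprocal_rank = next(
    (1 / (i + 1) for i, d in enumerate(retrieved) if d in relevant), 0.0
)
print(precision, recall, reciprocal_rank)  # 0.5 0.666... 1.0
```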
To install `elasticsearch-ir-evaluator`, use pip:

```bash
pip install elasticsearch-ir-evaluator
```
Prerequisites:

- Elasticsearch 8.11 or higher running on your system.
- Python 3.8 or higher.
The following steps will guide you through using `elasticsearch-ir-evaluator` to calculate search accuracy metrics.
For more detailed and practical examples, please refer to the examples directory in this repository.
Configure your Elasticsearch client with the appropriate credentials:
```python
from elasticsearch import Elasticsearch

es_client = Elasticsearch(
    hosts="https://your-elasticsearch-host",
    basic_auth=("your-username", "your-password"),
    verify_certs=True,
    ssl_show_warn=True,
)
```
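Before indexing anything, it can be worth confirming that the client can actually reach the cluster; `Elasticsearch.info()` is part of the official Python client:

```python
# Sanity check: confirm the client can reach the cluster
info = es_client.info()
print(info["version"]["number"])  # e.g. "8.11.0"
```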
Create and index a new corpus. You can customize index settings and text field configurations, including analyzers:
```python
from elasticsearch_ir_evaluator import ElasticsearchIrEvaluator, Document

# Initialize the ElasticsearchIrEvaluator
evaluator = ElasticsearchIrEvaluator(es_client)

# Specify your documents
documents = [
    Document(id="doc1", title="Title 1", text="Text of document 1"),
    Document(id="doc2", title="Title 2", text="Text of document 2"),
    # ... more documents
]

# Set custom index text field configurations
text_field_config = {"analyzer": "standard"}
evaluator.set_text_field_config(text_field_config)

# Create a new index or set an existing one
evaluator.set_index_name("your_index_name")

# Index documents with an optional ingest pipeline
evaluator.index(documents, pipeline="your_optional_pipeline")
```
Customize the search query template for Elasticsearch. Use `{{question}}` for the question text and `{{vector}}` for the vector value in `QandA`:
```python
search_template = {
    "query": {
        "multi_match": {
            "query": "{{question}}",
            "fields": ["title", "text"],
        }
    },
    "knn": [
        {
            "field": "vector",
            "query_vector": "{{vector}}",
            "k": 5,
            "num_candidates": 100,
        }
    ],
}

evaluator.set_search_template(search_template)
```
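The placeholder substitution happens inside the library; conceptually, `{{question}}` and `{{vector}}` are filled in per question before each search request is sent. The following is a minimal illustrative sketch of that idea, not the package's actual implementation (`render_search_body` is a hypothetical helper):

```python
import json

def render_search_body(template: dict, question: str, vector: list) -> dict:
    # Hypothetical helper, for illustration only: the package does this internally.
    raw = json.dumps(template)
    # The quoted "{{vector}}" placeholder becomes a JSON array of floats
    raw = raw.replace('"{{vector}}"', json.dumps(vector))
    # json.dumps(question)[1:-1] yields the escaped text without surrounding quotes
    raw = raw.replace("{{question}}", json.dumps(question)[1:-1])
    return json.loads(raw)

body = render_search_body(search_template, "What is Elasticsearch?", [0.1, 0.2, 0.3])
```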
Use `.calculate()` to compute all possible metrics based on the structure of the provided dataset:
```python
from elasticsearch_ir_evaluator import QandA

# Load QA pairs for evaluation
qa_pairs = [
    QandA(question="What is Elasticsearch?", answers=["doc1"]),
    # ... more QA pairs
]

# Calculate all metrics
results = evaluator.calculate(qa_pairs)

# Output results
print(results.to_markdown())
```
This step performs a comprehensive evaluation of search performance using the provided question-answer pairs. The `.calculate()` method computes all metrics that can be derived from the dataset's structure.
`elasticsearch-ir-evaluator` supports progress logging so that long-running indexing tasks can be safely interrupted and resumed. This is particularly useful when indexing large datasets or running extensive search evaluations, where the process may take an extended period.
When an indexing or evaluation process starts, the tool automatically generates a log file named `elasticsearch-ir-evaluator-log.json` in the current working directory. This log file records key information about the progress, including:
- `last_processed_id`: The ID of the last document that was successfully indexed or queried, so the process can resume from the exact point it was interrupted.
- `processed_count`: The total number of documents processed so far, giving a quick view of progress.
- `index_name`: The name of the Elasticsearch index in use, so the process resumes with the correct index context.
- `last_checkpoint_timestamp`: A timestamp marking the last update to the log file, indicating when the process was last active.
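If you want to inspect the checkpoint yourself, a sketch like the following works, assuming the fields above are stored as a flat JSON object (verify against the file produced in your environment):

```python
import json
from pathlib import Path

log_path = Path("elasticsearch-ir-evaluator-log.json")
if log_path.exists():
    # Field names as documented above; a flat JSON layout is assumed here
    checkpoint = json.loads(log_path.read_text())
    print("Resume after document:", checkpoint.get("last_processed_id"))
    print("Processed so far:", checkpoint.get("processed_count"))
    print("Index:", checkpoint.get("index_name"))
    print("Last active:", checkpoint.get("last_checkpoint_timestamp"))
```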
Upon restart, `elasticsearch-ir-evaluator` automatically detects the `elasticsearch-ir-evaluator-log.json` file and uses the information within to resume operations from where they left off. This prevents duplicate processing and ensures every document is accounted for, streamlining the continuation of interrupted tasks.
This logging feature is designed with data integrity in mind. By recording progress and resuming from it, `elasticsearch-ir-evaluator` minimizes the risk of incomplete evaluations or indexing, keeping IR metrics accurate and indexed datasets complete.
`elasticsearch-ir-evaluator` is available under the MIT License.