NPRF: A Neural Pseudo Relevance Feedback Framework for Ad-hoc Information Retrieval
If you use the code, please cite the following paper:
@inproceedings{li2018nprf,
title={NPRF: A Neural Pseudo Relevance Feedback Framework for Ad-hoc Information Retrieval},
author={Li, Canjia and Sun, Yingfei and He, Ben and Wang, Le and Hui, Kai and Yates, Andrew and Sun, Le and Xu, Jungang},
booktitle={Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing},
year={2018}
}
The following packages are required to run the code:
- Tensorflow
- Keras
- gensim
- numpy
To capture the top-k terms from the top-n documents, one needs to extract the document frequency of each term from the index. Next, generate the similarity matrices between the query and the documents using a pre-trained word embedding (e.g. word2vec). The related functions can be found in preprocess/prepare_d2d.py.
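As an illustration of the similarity-matrix step, the sketch below computes cosine similarities between query and document terms. It is not the repository's code: `sim_matrix` and the toy `emb` dictionary are hypothetical stand-ins for the functions in preprocess/prepare_d2d.py and for a real pre-trained word2vec model.

```python
import numpy as np

def sim_matrix(query_terms, doc_terms, embeddings):
    """Cosine-similarity matrix between query and document terms.

    `embeddings` maps a term to its vector; out-of-vocabulary terms
    are mapped to a zero vector. Simplified illustration only.
    """
    dim = len(next(iter(embeddings.values())))
    def vec(t):
        v = embeddings.get(t)
        return v / np.linalg.norm(v) if v is not None else np.zeros(dim)
    Q = np.stack([vec(t) for t in query_terms])  # shape (|q|, dim)
    D = np.stack([vec(t) for t in doc_terms])    # shape (|d|, dim)
    return Q @ D.T                               # shape (|q|, |d|)

# Toy embeddings standing in for a pre-trained word2vec model.
emb = {"neural": np.array([1.0, 0.0]),
       "retrieval": np.array([0.0, 1.0]),
       "search": np.array([0.6, 0.8])}
m = sim_matrix(["neural", "retrieval"], ["search", "neural"], emb)
print(m.shape)  # (2, 2)
```

In practice the embeddings would be loaded from a pre-trained word2vec file (e.g. via gensim) rather than defined inline.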
We provide two classes for ease of training. The Relevance class incorporates the relevance information from the baseline run and the qrels file. The Result class simplifies reading and writing TREC-formatted result files. Other information, such as query idf, is dumped as a pickle file.
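To show what the Result class abstracts over, here is a minimal, hypothetical stand-in that writes a TREC-format run file (`qid Q0 docno rank score run_name`); the class name and methods below are illustrative, not the repository's actual API.

```python
import os
import tempfile

class TrecResult:
    """Minimal stand-in for the repo's Result class: collects
    (docno, score) pairs per query and writes a TREC run file."""
    def __init__(self, run_name="nprf"):
        self.run_name = run_name
        self.ranking = {}  # qid -> list of (docno, score)

    def add(self, qid, docno, score):
        self.ranking.setdefault(qid, []).append((docno, score))

    def write(self, path):
        with open(path, "w") as f:
            for qid, docs in self.ranking.items():
                # Rank documents by descending score before writing.
                docs.sort(key=lambda x: -x[1])
                for rank, (docno, score) in enumerate(docs, 1):
                    f.write(f"{qid} Q0 {docno} {rank} {score:.4f} {self.run_name}\n")

res = TrecResult("nprf")
res.add("301", "docA", 1.5)
res.add("301", "docB", 2.5)
path = os.path.join(tempfile.gettempdir(), "nprf_demo.res")
res.write(path)
lines = open(path).read().splitlines()
print(lines[0])  # 301 Q0 docB 1 2.5000 nprf
```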
Configure the MODEL_config.py file, then run
python MODEL.py --fold fold_number temp_file_path
You need to run 5-fold cross-validation, which can be done automatically by running the runfold.sh script. The temp file is a temporary file to which the results on the validation set are written in TREC format.
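For readers unfamiliar with the setup, the fold split can be pictured as partitioning the query IDs into five disjoint groups; the sketch below is an assumption about one common way to do this, not the split used by runfold.sh, and the topic range 201-250 is only an example.

```python
def make_folds(qids, k=5):
    """Partition query IDs into k disjoint folds (round-robin)."""
    qids = sorted(qids)
    return [qids[i::k] for i in range(k)]

# e.g. 50 TREC topics split into 5 folds of 10 queries each
folds = make_folds(range(201, 251))
print(len(folds), [len(f) for f in folds])  # 5 [10, 10, 10, 10, 10]
```

Each fold serves once as the validation/test set while the model trains on the remaining four.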
After training, the evaluation results for each fold are retained in the result path specified in the MODEL_config.py file. One can simply run cat *res >> merge_file
to merge the results from all folds. Afterwards, run the trec_eval script to evaluate your model.
Some snippets of the code follow the implementations of K-NRM and MatchZoo.