Facilitating the design, comparison and sharing of deep text matching models.
MatchZoo is a general-purpose text matching toolkit that aims to make it easy to quickly implement, compare, and share the latest deep text matching models.
The goal of MatchZoo is to provide a high-quality codebase for deep text matching research, covering tasks such as document retrieval, question answering, conversational response ranking, and paraphrase identification. With a unified data processing pipeline, simplified model configuration, and automatic hyper-parameter tuning, MatchZoo is flexible and easy to use.
Tasks | Text 1 | Text 2 | Objective |
---|---|---|---|
Paraphrase Identification | string 1 | string 2 | classification |
Textual Entailment | text | hypothesis | classification |
Question Answering | question | answer | classification/ranking |
Conversation | dialog | response | classification/ranking |
Information Retrieval | query | document | ranking |
To train a Deep Structured Semantic Model (DSSM), import matchzoo and prepare the input data.
```python
import matchzoo as mz

train_pack = mz.datasets.wiki_qa.load_data('train', task='ranking')
valid_pack = mz.datasets.wiki_qa.load_data('dev', task='ranking')
predict_pack = mz.datasets.wiki_qa.load_data('test', task='ranking')
```
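As a quick sanity check, you can peek at what `load_data` returns. This is a hedged sketch that assumes the MatchZoo 2.x `DataPack` layout with `left`, `right`, and `relation` frames; attribute names may differ in other versions.

```python
# Assumption: DataPack exposes `relation`, `left`, and `right` as pandas DataFrames (MatchZoo 2.x).
print(train_pack.relation.head())   # (query id, document id, label) rows
print(train_pack.left.head())       # query texts, indexed by id
print(train_pack.right.head())      # document texts, indexed by id
```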
Preprocess your input data in three lines of code and keep track of the parameters to be passed into the model.
```python
preprocessor = mz.preprocessors.DSSMPreprocessor()
train_processed = preprocessor.fit_transform(train_pack)
valid_processed = preprocessor.transform(valid_pack)
```
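The fitted preprocessor records dataset statistics in its `context` dictionary; `input_shapes` is the only entry the model configuration below relies on, and printing it is a quick way to confirm the fit worked.

```python
# `input_shapes` is consumed by the DSSM configuration below; other context
# entries depend on the preprocessor (e.g. the vocabulary size used by DSSM's tri-letter hashing).
print(preprocessor.context['input_shapes'])
```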
Make use of MatchZoo's customized loss functions and evaluation metrics:
```python
ranking_task = mz.tasks.Ranking(loss=mz.losses.RankCrossEntropyLoss(num_neg=4))
ranking_task.metrics = [
    mz.metrics.NormalizedDiscountedCumulativeGain(k=3),
    mz.metrics.NormalizedDiscountedCumulativeGain(k=5),
    mz.metrics.MeanAveragePrecision()
]
```
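For intuition about the ranking metrics above, here is a small standalone sketch of NDCG@k using the common exponential-gain formulation. It is illustrative only; MatchZoo's own implementation may differ in details such as the gain function or tie handling.

```python
import numpy as np

def ndcg_at_k(y_true, y_score, k=3):
    """NDCG@k: DCG of the predicted ranking divided by the DCG of the ideal ranking."""
    y_true = np.asarray(y_true, dtype=float)
    order = np.argsort(y_score)[::-1][:k]                 # indices of the top-k predicted docs
    discounts = np.log2(np.arange(2, len(order) + 2))     # log2(rank + 1) for ranks 1..k
    dcg = np.sum((2.0 ** y_true[order] - 1.0) / discounts)
    ideal = np.sort(y_true)[::-1][:k]                     # best possible ordering of the labels
    idcg = np.sum((2.0 ** ideal - 1.0) / np.log2(np.arange(2, len(ideal) + 2)))
    return dcg / idcg if idcg > 0 else 0.0

# Two relevant documents; the model ranks only one of them at the top.
print(ndcg_at_k([1, 0, 0, 1], [0.9, 0.8, 0.3, 0.1], k=3))  # ~0.61
```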
Initialize the model and fine-tune the hyper-parameters.
```python
model = mz.models.DSSM()
model.params['input_shapes'] = preprocessor.context['input_shapes']
model.params['task'] = ranking_task
model.params['mlp_num_layers'] = 3
model.params['mlp_num_units'] = 300
model.params['mlp_num_fan_out'] = 128
model.params['mlp_activation_func'] = 'relu'
model.guess_and_fill_missing_params()
model.build()
model.compile()
```
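At this point the underlying network has been built and compiled. A hedged sketch, assuming the MatchZoo 2.x convention that a built model exposes the compiled Keras network as `model.backend`; if your version provides it, it lets you inspect the architecture:

```python
# Assumption: `model.backend` holds the compiled keras.Model in MatchZoo 2.x.
model.backend.summary()
```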
Generate pair-wise training data on the fly and evaluate model performance on the validation data using customized callbacks.
```python
train_generator = mz.PairDataGenerator(train_processed, num_dup=1, num_neg=4, batch_size=64, shuffle=True)
valid_x, valid_y = valid_processed.unpack()
evaluate = mz.callbacks.EvaluateAllMetrics(model, x=valid_x, y=valid_y, batch_size=len(valid_y))
history = model.fit_generator(train_generator, epochs=20, callbacks=[evaluate], workers=5, use_multiprocessing=False)
```
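The test split loaded earlier as `predict_pack` is not used above. A minimal sketch of scoring it, reusing the fitted preprocessor and assuming `model.predict` behaves like Keras' standard `predict`:

```python
# Transform the held-out split with the already-fitted preprocessor, then score it.
# Assumption: `model.predict` forwards to the underlying Keras model and returns
# one matching score per (query, document) pair.
predict_processed = preprocessor.transform(predict_pack)
test_x, test_y = predict_processed.unpack()
scores = model.predict(test_x)
```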
If you're interested in cutting-edge research progress, please take a look at awesome neural models for semantic match.
MatchZoo depends on Keras; please install one of its backend engines: TensorFlow, Theano, or CNTK. We recommend the TensorFlow backend. There are two ways to install MatchZoo:
Install MatchZoo from PyPI:

```bash
pip install matchzoo
```

Install MatchZoo from the GitHub source:

```bash
git clone https://github.com/NTMC-Community/MatchZoo.git
cd MatchZoo
python setup.py install
```
MatchZoo currently provides implementations of the following models:

- DRMM: this model is an implementation of A Deep Relevance Matching Model for Ad-hoc Retrieval.
- MatchPyramid: this model is an implementation of Text Matching as Image Recognition.
- ARC-I: this model is an implementation of Convolutional Neural Network Architectures for Matching Natural Language Sentences.
- DSSM: this model is an implementation of Learning Deep Structured Semantic Models for Web Search using Clickthrough Data.
- CDSSM: this model is an implementation of Learning Semantic Representations Using Convolutional Neural Networks for Web Search.
- ARC-II: this model is an implementation of Convolutional Neural Network Architectures for Matching Natural Language Sentences.
- MV-LSTM: this model is an implementation of A Deep Architecture for Semantic Matching with Multiple Positional Sentence Representations.
- aNMM: this model is an implementation of aNMM: Ranking Short Answer Texts with Attention-Based Neural Matching Model.
- DUET: this model is an implementation of Learning to Match Using Local and Distributed Representations of Text for Web Search.
- K-NRM: this model is an implementation of End-to-End Neural Ad-hoc Ranking with Kernel Pooling.
- CONV-KNRM: this model is an implementation of Convolutional neural networks for soft-matching n-grams in ad-hoc search.
- Models under development: Match-SRNN, DeepRank, BiMPM, ...
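Since every model follows the same params / build / compile workflow shown above, switching architectures mainly means choosing a different model class and its matching preprocessor. A hedged sketch, assuming the `mz.models.list_available()` helper found in MatchZoo 2.x:

```python
# Assumption: `mz.models.list_available()` exists (MatchZoo 2.x) and returns the
# built-in model classes, each configurable through the same `.params` interface.
for model_class in mz.models.list_available():
    print(model_class)
```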
If you use MatchZoo in your research, please use the following BibTeX entry.
```
@article{fan2017matchzoo,
  title={Matchzoo: A toolkit for deep text matching},
  author={Fan, Yixing and Pang, Liang and Hou, JianPeng and Guo, Jiafeng and Lan, Yanyan and Cheng, Xueqi},
  journal={arXiv preprint arXiv:1707.07270},
  year={2017}
}
```
Name | Role |
---|---|
Fan Yixing | Core Dev |
Wang Bo | Core Dev |
Wang Zeyi | Core Dev |
Pang Liang | Core Dev |
Yang Liu | Core Dev |
Wang Qinghua | Documentation |
Wang Zizhen | Dev |
Su Lixin | Dev |
Yang Zhou | Dev |
Tian Junfeng | Dev |
Please make sure to read the Contributing Guide before creating a pull request. If you have a MatchZoo-related paper/project/component/tool, send a pull request to this awesome list!
Thank you to all the people who already contributed to MatchZoo!
Jianpeng Hou, Lijuan Chen, Yukun Zheng, Niuguo Cheng, Dai Zhuyun, Aneesh Joshi, Zeno Gantner, Kai Huang, stanpcf, ChangQF, Mike Kellogg
- Jiafeng Guo, Institute of Computing Technology, Chinese Academy of Sciences (Homepage)
- Yanyan Lan, Institute of Computing Technology, Chinese Academy of Sciences (Homepage)
- Xueqi Cheng, Institute of Computing Technology, Chinese Academy of Sciences (Homepage)
Copyright (c) 2015-present, Yixing Fan (faneshion)