An End-To-End Closed Domain Question Answering System.
You can install cdQA with pip:

```shell
pip install cdqa
```

Or install it from source:

```shell
git clone https://github.com/cdqa-suite/cdQA.git
cd cdQA
pip install -e .
```
Experiments have been done with:
- CPU 👉 AWS EC2 `t2.medium`, Deep Learning AMI (Ubuntu) Version 22.0
- GPU 👉 AWS EC2 `p3.2xlarge`, Deep Learning AMI (Ubuntu) Version 22.0 + a single Tesla V100 16GB
To use cdQA, you need to create a pandas DataFrame with the following columns:

| title | paragraphs |
|---|---|
| The Article Title | [Paragraph 1 of Article, ..., Paragraph N of Article] |
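As a concrete illustration, such a DataFrame can be built directly in pandas (the title and paragraph texts below are placeholders):

```python
import pandas as pd

# One row per document: a title plus the ordered list of its paragraphs.
df = pd.DataFrame({
    'title': ['The Article Title'],
    'paragraphs': [[
        'Paragraph 1 of Article ...',
        'Paragraph N of Article ...',
    ]],
})
```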
The objective of cdqa converters is to make it easy to create this DataFrame from your raw documents database. For instance, the `pdf_converter` can create a cdqa DataFrame from a directory containing `.pdf` files:
```python
from cdqa.utils.converters import pdf_converter

df = pdf_converter(directory_path='path_to_pdf_folder')
```
You will need to install Java OpenJDK to use this converter. We currently have converters for:

- pdf
- markdown

We plan to improve and add more converters in the future. Stay tuned!
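In the meantime, for formats without a converter you can build the expected DataFrame yourself. Here is a minimal sketch (not part of cdqa; `txt_to_df` and the folder path are hypothetical) for a directory of plain-text files, assuming one document per file with paragraphs separated by blank lines:

```python
import os

import pandas as pd

def txt_to_df(directory_path):
    """Build a cdQA-style DataFrame from a directory of .txt files."""
    rows = []
    for filename in sorted(os.listdir(directory_path)):
        if not filename.endswith('.txt'):
            continue
        with open(os.path.join(directory_path, filename), encoding='utf-8') as f:
            text = f.read()
        # Treat blank-line-separated blocks as paragraphs.
        paragraphs = [p.strip() for p in text.split('\n\n') if p.strip()]
        rows.append({'title': os.path.splitext(filename)[0], 'paragraphs': paragraphs})
    return pd.DataFrame(rows, columns=['title', 'paragraphs'])

df = txt_to_df('path_to_txt_folder')
```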
You can download the models and data manually from the GitHub releases or use our download functions:
```python
from cdqa.utils.download import download_squad, download_model, download_bnpp_data

directory = 'path-to-directory'

# Downloading data
download_squad(dir=directory)
download_bnpp_data(dir=directory)

# Downloading pre-trained BERT fine-tuned on SQuAD 1.1
download_model('bert-squad_1.1', dir=directory)
```
Fit the pipeline on your corpus using the pre-trained reader:
```python
import pandas as pd
from ast import literal_eval

from cdqa.pipeline.cdqa_sklearn import QAPipeline

df = pd.read_csv('your-custom-corpus-here.csv', converters={'paragraphs': literal_eval})

cdqa_pipeline = QAPipeline(model='bert_qa_vCPU-sklearn.joblib')
cdqa_pipeline.fit_retriever(X=df)
```
If you want to fine-tune the reader on your custom SQuAD-like annotated dataset:
```python
cdqa_pipeline = QAPipeline(model='bert_qa_vGPU-sklearn.joblib')
cdqa_pipeline.fit_reader('path-to-custom-squad-like-dataset.json')
```
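One way to persist the fine-tuned pipeline for later use is plain joblib serialization (a sketch, not a cdqa-specific API; the output filename is a placeholder):

```python
import joblib

# Serialize the whole fitted pipeline; restore it later with joblib.load().
joblib.dump(cdqa_pipeline, 'bert_qa_custom.joblib')
```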
To get the best prediction given an input query:
```python
cdqa_pipeline.predict(X='your question here')
```
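The prediction comes back as a tuple; here is a sketch of unpacking it, assuming the (answer, title, paragraph) ordering used in the project's examples:

```python
query = 'your question here'
prediction = cdqa_pipeline.predict(X=query)

print('query: {}'.format(query))
print('answer: {}'.format(prediction[0]))     # best answer span
print('title: {}'.format(prediction[1]))      # title of the source article
print('paragraph: {}'.format(prediction[2]))  # paragraph containing the answer
```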
In order to evaluate models on your custom dataset, you will need to annotate it. The annotation and evaluation process can be done in four steps:
1. Convert your pandas DataFrame into a JSON file with SQuAD format:

    ```python
    from cdqa.utils.converters import df2squad

    json_data = df2squad(df=df, squad_version='v1.1', output_dir='.', filename='dataset-name')
    ```
2. Use an annotator to add ground truth question-answer pairs: please refer to our cdQA-annotator, a web-based annotator for closed-domain question answering datasets with SQuAD format.
3. Evaluate the reader:

    ```python
    from cdqa.utils.evaluation import evaluate_reader

    evaluate_reader(dataset_file='path-to-annotated-dataset.json', prediction_file='predictions.json')
    ```
4. Evaluate a whole pipeline object:

    ```python
    from cdqa.utils.evaluation import evaluate_pipeline

    evaluate_pipeline(cdqa_pipeline, 'path-to-annotated-dataset.json')
    ```
We prepared some notebook examples under the examples directory.
You can also play directly with these notebook examples using Binder or Google Colaboratory:
| Notebook | Hardware |
|---|---|
| [1] First steps with cdQA | CPU or GPU |
| [2] Using the PDF converter | CPU or GPU |
| [3] Training the reader on SQuAD | GPU |
Binder and Google Colaboratory provide temporary environments and may be slow to start, but we recommend them if you want to get started with cdQA easily.
You can deploy a cdQA REST API by executing:

```shell
export dataset_path='path-to-dataset.csv'
export reader_path='path-to-reader-model'

FLASK_APP=api.py flask run -h 0.0.0.0
```
You can now make requests to test your API (here using HTTPie):
```shell
http localhost:5000/api query=='your question here'
```
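If you prefer plain Python over HTTPie, an equivalent request can be made with the `requests` library (a sketch assuming the same `query` parameter as in the call above):

```python
import requests

# GET request mirroring the HTTPie call above.
response = requests.get('http://localhost:5000/api', params={'query': 'your question here'})
print(response.json())
```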
If you wish to serve a user interface on top of your cdQA system, follow the instructions of cdQA-ui, a web interface developed for cdQA.
Read our Contributing Guidelines.
Type | Title | Author | Year |
---|---|---|---|
📹 Video | Stanford CS224N: NLP with Deep Learning Lecture 10 – Question Answering | Christopher Manning | 2019 |
📰 Paper | Reading Wikipedia to Answer Open-Domain Questions | Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes | 2017 |
📰 Paper | Neural Reading Comprehension and Beyond | Danqi Chen | 2018 |
📰 Paper | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova | 2018 |
📰 Paper | Contextual Word Representations: A Contextual Introduction | Noah A. Smith | 2019 |
📰 Paper | End-to-End Open-Domain Question Answering with BERTserini | Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, Jimmy Lin | 2019 |
📰 Paper | Data Augmentation for BERT Fine-Tuning in Open-Domain Question Answering | Wei Yang, Yuqing Xie, Luchen Tan, Kun Xiong, Ming Li, Jimmy Lin | 2019 |
📰 Paper | Passage Re-ranking with BERT | Rodrigo Nogueira, Kyunghyun Cho | 2019 |
📰 Paper | MRQA: Machine Reading for Question Answering | Jonathan Berant, Percy Liang, Luke Zettlemoyer | 2019 |
📰 Paper | Unsupervised Question Answering by Cloze Translation | Patrick Lewis, Ludovic Denoyer, Sebastian Riedel | 2019 |
💻 Framework | Scikit-learn: Machine Learning in Python | Pedregosa et al. | 2011 |
💻 Framework | PyTorch | Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan | 2016 |
💻 Framework | PyTorch Transformers: A library of state-of-the-art pretrained models for Natural Language Processing (NLP) | Hugging Face | 2018 |