PubTabNet is a large dataset for image-based table recognition, containing 568k+ images of tabular data annotated with the corresponding HTML representation of the tables. The table images are extracted from the scientific publications included in the PubMed Central Open Access Subset (commercial use collection). Table regions are identified by matching the PDF format and the XML format of the articles in the PubMed Central Open Access Subset. More details are available in our paper "Image-based table recognition: data, model, and evaluation".
04/May/2021
- The report for the ICDAR 2021 Scientific Literature Parsing competition is available here.
21/July/2020
- PubTabNet 2.0.0 is released, adding the position (bounding box) of non-empty cells to the annotation. The annotation file also moves from json to jsonl format, so it can be read line by line without loading the whole file into RAM.
20/July/2020
- PubTabNet is used in the ICDAR 2021 Competition on Scientific Literature Parsing (Task B on Table Recognition).
03/July/2020
- "Image-based table recognition: data, model, and evaluation" is accepted to ECCV 2020.
01/July/2020
- Code of the Tree-Edit-Distance-based Similarity (TEDS) metric is released.
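The TEDS metric treats each table as the tree of its HTML tags and text, computes a tree-edit distance between prediction and ground truth, and normalises it by the size of the larger tree. The snippet below is only a sketch of that normalisation step, not the released implementation; the edit distance and node counts are assumed to be given.

```python
def teds_score(edit_distance, n_nodes_pred, n_nodes_true):
    """TEDS = 1 - EditDist(Ta, Tb) / max(|Ta|, |Tb|), clipped to [0, 1].

    The tree-edit distance itself (node insert/delete/substitute on the
    HTML tree of the table) must be computed separately, e.g. with an
    APTED-style algorithm; here it is taken as an input.
    """
    return max(0.0, 1.0 - edit_distance / max(n_nodes_pred, n_nodes_true))

# A predicted tree with 30 nodes, ground truth with 28, 6 edits apart:
print(teds_score(6, 30, 28))  # → 0.8
```

A perfect reconstruction scores 1.0; completely unrelated trees score 0.0.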
In our paper, we proposed a new encoder-dual-decoder architecture, trained on PubTabNet, which can accurately reconstruct the HTML representation of complex tables from image input alone. Due to legal constraints, the source code of the model will not be released.
The ground truth of the test set will not be released, as we want to keep it for a future competition. We will soon offer a service for submitting and evaluating results.
Images and annotations can be downloaded here. To download the data from the command line, use curl or wget:
curl -o <YOUR_TARGET_DIR>/PubTabNet.tar.gz https://dax-cdn.cdn.appdomain.cloud/dax-pubtabnet/2.0.0/pubtabnet.tar.gz
wget -O <YOUR_TARGET_DIR>/PubTabNet.tar.gz https://dax-cdn.cdn.appdomain.cloud/dax-pubtabnet/2.0.0/pubtabnet.tar.gz
The annotation is in jsonl (JSON Lines) format, where each line contains the annotation of a single sample with the following structure:
{
  'filename': str,
  'split': str,
  'imgid': int,
  'html': {
    'structure': {'tokens': [str]},
    'cell': [
      {
        'tokens': [str],
        'bbox': [x0, y0, x1, y1]  # only non-empty cells have this attribute
      }
    ]
  }
}
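To recover the full HTML of a table, the token stream of each cell is spliced into the structure token stream right after the cell's opening tag (a `<td>` token, or the bare `>` that closes a `<td` token carrying rowspan/colspan attributes). The sketch below assumes this splicing scheme; the key names follow the schema above, and the annotation file name is illustrative.

```python
import json

def format_html(ann):
    """Splice cell contents into the structure tokens to rebuild the table.

    Structure tokens carry the table skeleton ('<tr>', '<td>', ..., plus a
    bare '>' closing '<td' tokens that have span attributes); each cell's
    tokens are inserted immediately after its opening tag.
    """
    html = ann['html']['structure']['tokens'].copy()
    # Positions of cell-opening tags, matched in order with 'cell' entries.
    openings = [i for i, tag in enumerate(html) if tag in ('<td>', '>')]
    # Insert from the back so earlier indices stay valid.
    for i, cell in zip(openings[::-1], ann['html']['cell'][::-1]):
        if cell['tokens']:  # empty cells contribute nothing
            html.insert(i + 1, ''.join(cell['tokens']))
    return '<html><body><table>%s</table></body></html>' % ''.join(html)

# jsonl can be streamed line by line, keeping memory use flat
# (the file name below is illustrative):
# with open('PubTabNet_2.0.0.jsonl') as f:
#     for line in f:
#         table_html = format_html(json.loads(line))
```

Inserting from the last cell backwards avoids shifting the indices of cells that have not been filled in yet.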
@article{zhong2019image,
  title={Image-based table recognition: data, model, and evaluation},
  author={Zhong, Xu and ShafieiBavani, Elaheh and Yepes, Antonio Jimeno},
  journal={arXiv preprint arXiv:1911.10683},
  year={2019}
}
A Jupyter notebook is provided to inspect the annotations of 20 sample tables.
PubLayNet is a large dataset of document images whose layout is annotated with both bounding boxes and polygonal segmentations. The documents come from the PubMed Central Open Access Subset (commercial use collection), and the annotations are automatically generated by matching the PDF format and the XML format of the articles in that subset. More details are available in our paper "PubLayNet: largest dataset ever for document layout analysis", which won the best paper award at ICDAR 2019!