This software implements TextFuseNet: Scene Text Detection with Richer Fused Features in PyTorch. For more details, please refer to our paper https://www.ijcai.org/Proceedings/2020/72.
Arbitrary-shape text detection in natural scenes is an extremely challenging task. Unlike existing text detection approaches that only perceive texts based on limited feature representations, we propose a novel framework, namely TextFuseNet, to exploit richer fused features for text detection. More specifically, we propose to perceive texts from three levels of feature representation, i.e., character-, word- and global-level, and then introduce a novel text representation fusion technique to help achieve robust arbitrary-shape text detection. The multi-level feature representation can adequately describe texts by dissecting them into individual characters while still maintaining their general semantics. TextFuseNet then collects and merges the texts' features from different levels using a multi-path fusion architecture, which can effectively align and fuse the different representations. In practice, our proposed TextFuseNet can learn a more adequate description of arbitrarily shaped texts, suppressing false positives and producing more accurate detection results. Our proposed framework can also be trained with weak supervision on datasets that lack character-level annotations. Experiments on several datasets show that the proposed TextFuseNet achieves state-of-the-art performance. Specifically, we achieve an F-measure of 94.3% on ICDAR2013, 92.1% on ICDAR2015, 87.1% on Total-Text and 86.6% on CTW-1500, respectively.
This implementation is based on Detectron2; for installation, please refer to step-by-step installation.txt. For more details about the conda environment, please refer to requirements.txt.
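The setup described in step-by-step installation.txt can be sketched roughly as follows; the environment name is hypothetical, and the authoritative steps are in that file:

```shell
# Minimal sketch of environment setup (names are illustrative, not from the repo).
conda create -n textfusenet python=3.7 -y
conda activate textfusenet
# Install the pinned dependencies listed in the repo.
pip install -r requirements.txt
```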
A demo program can be found in demo. Before running the demo, download our pretrained models from Baidu Netdisk (extraction code: 8op1) or Google Drive. Set the file paths (model, test images, configs, output, etc.) in demo/***_detection.py, then launch the demo with:
python demo/icdar2013_detection.py
Before training, please register your datasets in detectron2/data/datasets/builtin.py and set the training implementation details in configs/ocr/***.yaml. To train a model with 4 GPUs, please run:
python tools/train_net.py --num-gpus 4 --config-file configs/ocr/icdar2013_101_FPN.yaml
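The dataset registration mentioned above follows Detectron2's standard conventions. A minimal sketch, assuming the annotations are in COCO format; the dataset name and paths below are hypothetical placeholders, not the repo's actual entries:

```python
# Hedged sketch of registering a custom dataset, Detectron2-style.
# The dataset name must match the one referenced in the yaml config's
# DATASETS.TRAIN / DATASETS.TEST fields; paths are illustrative only.
from detectron2.data.datasets import register_coco_instances

register_coco_instances(
    "icdar2013_train",                              # hypothetical dataset name
    {},                                             # extra metadata (empty here)
    "datasets/icdar2013/annotations/train.json",    # COCO-format annotation file
    "datasets/icdar2013/train_images",              # image root directory
)
```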
An annotation example can be found in annotation_example. For word-level and character-level labels, please see the corresponding details of the weakly supervised learning method in our paper. The semantic segmentation labels are generated from the masks of text instances during training; for more details, please see the corresponding code in seg_head.py.
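The idea of deriving a semantic segmentation label from instance masks can be sketched as below. This is a minimal NumPy illustration of the principle, not the actual seg_head.py code: the per-instance binary text masks are merged into a single text/background map.

```python
import numpy as np

def merge_instance_masks(instance_masks):
    """Merge per-instance binary text masks (each H x W) into one
    semantic segmentation label: 1 = text pixel, 0 = background."""
    seg = np.zeros_like(instance_masks[0], dtype=np.uint8)
    for mask in instance_masks:
        seg |= mask.astype(np.uint8)  # union of all text instances
    return seg

# Two toy 4x4 instance masks for two separate text instances
m1 = np.zeros((4, 4), dtype=np.uint8); m1[0, :2] = 1
m2 = np.zeros((4, 4), dtype=np.uint8); m2[2, 1:3] = 1
seg = merge_instance_masks([m1, m2])  # 4 text pixels in total
```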
Example results of TextFuseNet on different datasets.
Evaluation of TextFuseNet on different datasets with ResNet-101 backbone:
Datasets | Model | Recall | Precision | F-measure |
---|---|---|---|---|
Total-Text | Paper | 85.3 | 89.0 | 87.1 |
Total-Text | This implementation | 85.8 | 89.2 | 87.5 |
CTW-1500 | Paper | 85.4 | 87.8 | 86.6 |
CTW-1500 | This implementation | 85.1 | 89.7 | 87.4 |
ICDAR2013 | Paper | 92.3 | 96.5 | 94.3 |
ICDAR2013 | This implementation | 92.1 | 97.2 | 94.6 |
ICDAR2015 | Paper | 89.7 | 94.7 | 92.1 |
ICDAR2015 | This implementation | 90.6 | 94.0 | 92.2 |
ICDAR2019-ArT | This implementation | 72.8 | 85.4 | 78.6 |
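The F-measure column is the harmonic mean of precision and recall, which can be checked against any row of the table:

```python
def f_measure(recall: float, precision: float) -> float:
    """Harmonic mean of precision and recall (values given in percent)."""
    return 2 * precision * recall / (precision + recall)

# Total-Text (paper): recall 85.3, precision 89.0 -> F-measure 87.1
print(round(f_measure(85.3, 89.0), 1))  # -> 87.1
```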
@inproceedings{ijcai2020-72,
title={TextFuseNet: Scene Text Detection with Richer Fused Features},
author={Ye, Jian and Chen, Zhe and Liu, Juhua and Du, Bo},
booktitle={Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, {IJCAI-20}},
publisher={International Joint Conferences on Artificial Intelligence Organization},
pages={516--522},
year={2020}
}
The authors would like to thank the developers of PyTorch and Detectron2. See LICENSE for additional details.
Please let me know if you encounter any issues.