DocumentBinarization


Two-Stage Generative Adversarial Networks for Binarization of Color Document Images

Figure 1

A PyTorch implementation of Two-Stage Generative Adversarial Networks for Document Image Binarization, as described in the paper below.

Abstract

Document image enhancement and binarization methods are often used to improve the accuracy and efficiency of document image analysis tasks such as text recognition. Traditional non-machine-learning methods are constructed on low-level features in an unsupervised manner but have difficulty with binarization on documents with severely degraded backgrounds. Convolutional neural network (CNN)-based methods focus only on grayscale images and on local textual features. In this paper, we propose a two-stage color document image enhancement and binarization method using generative adversarial neural networks. In the first stage, four color-independent adversarial networks are trained to extract color foreground information from an input image for document image enhancement. In the second stage, two independent adversarial networks with global and local features are trained for image binarization of documents of variable size. For the adversarial neural networks, we formulate loss functions between a discriminator and generators having an encoder-decoder structure. Experimental results show that the proposed method achieves better performance than many classical and state-of-the-art algorithms over the Document Image Binarization Contest (DIBCO) datasets, the LRDE Document Binarization Dataset (LRDE DBD), and our shipping label image dataset.
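
For orientation, the sketch below shows the general pattern the abstract refers to: an encoder-decoder generator trained adversarially against a discriminator, in PyTorch. The layer configuration, loss weighting, and class names are illustrative assumptions, not the architecture used in the paper or in this repository.

```python
import torch
import torch.nn as nn

class EncoderDecoderGenerator(nn.Module):
    """Minimal encoder-decoder generator (illustrative only)."""
    def __init__(self, in_ch=3, out_ch=1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, out_ch, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class PatchDiscriminator(nn.Module):
    """Minimal patch-style discriminator over (input image, candidate binarization) pairs."""
    def __init__(self, in_ch=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),
        )

    def forward(self, image, binarized):
        return self.net(torch.cat([image, binarized], dim=1))

# Illustrative loss computation for one batch: adversarial terms for D and G
# plus a pixel-wise L1 term for G. The weight 100.0 is an arbitrary example value.
adv_loss, pix_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()
G, D = EncoderDecoderGenerator(), PatchDiscriminator()
image, gt = torch.rand(1, 3, 256, 256), torch.rand(1, 1, 256, 256).round()

fake = G(image)
d_real, d_fake = D(image, gt), D(image, fake.detach())
loss_D = adv_loss(d_real, torch.ones_like(d_real)) + adv_loss(d_fake, torch.zeros_like(d_fake))
d_fake_for_g = D(image, fake)
loss_G = adv_loss(d_fake_for_g, torch.ones_like(d_fake_for_g)) + 100.0 * pix_loss(fake, gt)
```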

Models

The performance of each model is summarized in the tables below.

Evaluation of binarization

| H-DIBCO 2016 | FM | p-FM | PSNR | DRD |
| --- | --- | --- | --- | --- |
| Otsu | 86.59 | 89.92 | 17.79 | 5.58 |
| Niblack | 72.57 | 73.51 | 13.26 | 24.65 |
| Sauvola | 84.27 | 89.10 | 17.15 | 6.09 |
| Vo | 90.01 | 93.44 | 18.74 | 3.91 |
| He | 91.19 | 95.74 | 19.51 | 3.02 |
| Zhao | 89.77 | 94.85 | 18.80 | 3.85 |
| Ours | 92.24 | 95.95 | 19.93 | 2.77 |

| Shipping Label | FM | p-FM | PSNR | DRD |
| --- | --- | --- | --- | --- |
| Otsu | 88.31 | 89.42 | 14.73 | 6.17 |
| Niblack | 86.61 | 89.46 | 13.59 | 6.61 |
| Sauvola | 87.67 | 89.53 | 14.18 | 5.75 |
| Vo | 91.20 | 92.92 | 16.14 | 2.20 |
| He | 91.09 | 92.26 | 16.03 | 2.33 |
| Zhao | 92.09 | 93.83 | 16.29 | 2.37 |
| Ours | 94.65 | 95.94 | 18.02 | 1.57 |

OCR accuracy in Levenshtein distance

| Shipping Label | Total | Korean | Alphabet |
| --- | --- | --- | --- |
| Input Image | 77.20 | 73.86 | 94.47 |
| Ground Truth | 84.62 | 85.88 | 96.66 |
| Otsu | 74.45 | 70.72 | 93.79 |
| Niblack | 69.00 | 66.31 | 82.94 |
| Sauvola | 72.84 | 68.81 | 93.73 |
| Vo | 77.14 | 74.69 | 89.86 |
| He | 75.15 | 72.45 | 89.13 |
| Zhao | 77.33 | 74.56 | 91.69 |
| Ours | 83.40 | 81.15 | 95.09 |
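
FM (F-measure), p-FM (pseudo F-measure), PSNR, and DRD (distance reciprocal distortion) are the standard DIBCO evaluation measures. As a quick reference, the NumPy sketch below computes the plain F-measure and PSNR for binary images using their standard definitions; it is not the evaluation code used to produce the tables above.

```python
import numpy as np

def f_measure(pred, gt):
    """F-measure (in %) between a binary prediction and ground truth, with 1 = text pixel."""
    tp = np.logical_and(pred == 1, gt == 1).sum()
    fp = np.logical_and(pred == 1, gt == 0).sum()
    fn = np.logical_and(pred == 0, gt == 1).sum()
    precision = tp / (tp + fp + 1e-12)
    recall = tp / (tp + fn + 1e-12)
    return 100.0 * 2 * precision * recall / (precision + recall + 1e-12)

def psnr(pred, gt):
    """PSNR (in dB) for binary images whose pixel values are 0 or 1."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return 10.0 * np.log10(1.0 / mse) if mse > 0 else float("inf")
```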

Prerequisites

  • Linux (Ubuntu)
  • Python >= 3.6
  • NVIDIA GPU + CUDA + cuDNN

Installation
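
A typical setup for a PyTorch project such as this one is sketched below; the exact dependency list is an assumption (check the repository for a requirements file), and <repository URL> is a placeholder.

git clone <repository URL>
cd DocumentBinarization
pip3 install torch torchvision numpy opencv-python pillow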

Model train/eval

  • Make the ground truth per dataset
(In the case of DIBCO)
python3 ./Common/make_ground_truth_dibco.py
python3 ./Common/make_ground_truth_512_dibco.py
  • Train a model per dataset
(In the case of Label)
1) sh ./Label/train_5_fold_step1.sh
2) sh ./Label/predict_for_step2_5_fold.sh
3) sh ./Label/train_5_fold_step2.sh
4) sh ./Label/train_5_fold_resize.sh
  • Evaluate the model per dataset
(In the case of Label)
sh ./Label/predict_step2_5_fold.sh
  • Trained weights
    • You can download the trained weights from Dropbox via the link below. (A hypothetical loading sketch follows at the end of this section.)
    • Link
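
If the downloaded checkpoint is a regular PyTorch state dict, it can presumably be restored with torch.load and applied to an image roughly as below. The checkpoint file name, the fixed 512x512 input size, and the generator class (reused from the illustrative sketch after the Abstract) are placeholders for illustration only, not the repository's actual interface; the prediction scripts above are the supported path.

```python
import numpy as np
import torch
from PIL import Image

# Hypothetical usage only: "generator_step2.pth" is a placeholder file name, and
# EncoderDecoderGenerator is the illustrative class from the sketch after the Abstract.
# Substitute the repository's real generator class and checkpoint.
model = EncoderDecoderGenerator()
model.load_state_dict(torch.load("generator_step2.pth", map_location="cpu"))
model.eval()

# Load an image, run the generator, and threshold its output into a binary map.
img = Image.open("sample.png").convert("RGB").resize((512, 512))  # fixed size assumed for this sketch
x = torch.from_numpy(np.asarray(img, dtype=np.float32) / 255.0).permute(2, 0, 1).unsqueeze(0)
with torch.no_grad():
    prob = model(x)[0, 0].numpy()
Image.fromarray(((prob > 0.5) * 255).astype(np.uint8)).save("sample_binarized.png")
```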