
Attention OCR

A clear and maintainable implementation of Attention OCR in TensorFlow 2.0.

The model is a sequence-to-sequence network that performs optical character recognition using an attention mechanism.

Please note that this is currently a work in progress. Documentation is missing, but will be added when the code is stable.
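Since documentation is still pending, the sketch below illustrates the general shape of such a model: a convolutional encoder that turns the image into a sequence of feature vectors, and a recurrent decoder that attends over that sequence while predicting characters. All layer sizes, input shapes, and names in it are assumptions made for illustration and do not reflect this repository's actual code.

    # Minimal, hypothetical sketch of an attention-based seq2seq OCR model in
    # TensorFlow 2.0 Keras. Shapes, sizes and names are illustrative only.
    import tensorflow as tf

    VOCAB_SIZE = 64                    # assumed character vocabulary size
    IMG_HEIGHT, IMG_WIDTH = 32, 128    # assumed fixed input size
    UNITS = 256

    # Encoder: a small CNN whose output is flattened along the width axis into
    # a sequence of feature vectors that the decoder can attend over.
    image = tf.keras.Input(shape=(IMG_HEIGHT, IMG_WIDTH, 1), name="image")
    x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(image)
    x = tf.keras.layers.MaxPool2D((2, 2))(x)                  # (16, 64, 64)
    x = tf.keras.layers.Conv2D(128, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.MaxPool2D((2, 2))(x)                  # (8, 32, 128)
    x = tf.keras.layers.Permute((2, 1, 3))(x)                 # width axis first
    x = tf.keras.layers.Reshape((32, 8 * 128))(x)             # (width, features)
    encoder_features = tf.keras.layers.Dense(UNITS)(x)

    # Decoder: embeds the previously emitted characters (teacher forcing), runs
    # a GRU, and attends over the encoder features before predicting each
    # output character.
    prev_chars = tf.keras.Input(shape=(None,), dtype="int32", name="prev_chars")
    embedded = tf.keras.layers.Embedding(VOCAB_SIZE, UNITS)(prev_chars)
    decoder_seq = tf.keras.layers.GRU(UNITS, return_sequences=True)(embedded)
    context = tf.keras.layers.AdditiveAttention()([decoder_seq, encoder_features])
    logits = tf.keras.layers.Dense(VOCAB_SIZE)(
        tf.keras.layers.Concatenate()([decoder_seq, context]))

    model = tf.keras.Model([image, prev_chars], logits)
    model.compile(optimizer="adam",
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
    model.summary()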

This repository depends on the following:

  • TensorFlow 2.0
  • Python 3.6+

Training a model

To train a model, first download the sources for generating synthetic data:

cd synthetic
./download_data_sources.sh

Next, in this project's root folder, run the training script:

python3 run.py

This performs a short test training run. If everything went well, you'll find a file named "trained.h5" in your working directory. To train a real model, adjust the training parameters; run run.py with --help to see what is configurable:

python3 run.py --help
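Assuming trained.h5 was saved as a complete Keras model (with model.save rather than weights only, which is an assumption about this repository), it can be reloaded for a quick sanity check:

    import tensorflow as tf

    # Assumes trained.h5 is a full Keras model; if the repository saves weights
    # only, or uses custom layers, pass custom_objects or rebuild the model and
    # use load_weights instead.
    model = tf.keras.models.load_model("trained.h5", compile=False)
    model.summary()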

References

This implementation is based on the following work:

To do

  • Make image height variable
  • Name all input and output tensors
  • Write unit tests with full coverage
  • Show a test case on Google Colab
  • Perform a grid search for the best parameters on a toy dataset
  • Document the whole API
