chinese_ocr

yolo3 + densenet + ctc ocr

setup

See setup.

Download the model.

test

python demo.py

You can also see understand_detect.
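
The recognition half of the pipeline ends with CTC decoding of the DenseNet output. Below is a minimal greedy-decode sketch using NumPy; it is illustrative only, not the code demo.py actually runs, and the toy charset and probabilities are made up.

# Illustrative greedy CTC decode; not the exact code used by demo.py.
import numpy as np

def ctc_greedy_decode(probs, charset, blank=0):
    # probs: (T, C) per-timestep class probabilities from the recognition head.
    # charset: maps class index -> character; index `blank` is the CTC blank.
    # Greedy rule: argmax per timestep, collapse repeats, drop blanks.
    best = probs.argmax(axis=1)
    decoded = []
    prev = blank
    for idx in best:
        if idx != blank and idx != prev:
            decoded.append(charset[idx])
        prev = idx
    return "".join(decoded)

# Toy example: 5 timesteps, 4 classes (class 0 is the blank).
charset = ["", "a", "b", "c"]
probs = np.array([
    [0.1, 0.8, 0.05, 0.05],   # 'a'
    [0.1, 0.8, 0.05, 0.05],   # repeated 'a', collapsed
    [0.9, 0.05, 0.03, 0.02],  # blank
    [0.1, 0.1, 0.1, 0.7],     # 'c'
    [0.8, 0.1, 0.05, 0.05],   # blank
])
print(ctc_greedy_decode(probs, charset))  # -> "ac"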

result

train

cd train

Run python train.py, or use train_with_param to handle a different dataset.

dataset format

 ---dataset
    --images
        --xxx.jpg
    --data_train.txt
    --data_test.txt
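
The exact layout of data_train.txt and data_test.txt is not documented here. The loader sketch below assumes one sample per line, an image filename followed by its label text separated by whitespace; check train_with_param for the format the training code really expects.

# Minimal label-file loader; the "filename label" layout per line is an assumption.
import os

def load_labels(label_file, image_dir="images"):
    samples = []
    with open(label_file, encoding="utf-8") as f:
        for line in f:
            parts = line.strip().split(maxsplit=1)
            if len(parts) != 2:
                continue  # skip blank or malformed lines
            name, label = parts
            samples.append((os.path.join(image_dir, name), label))
    return samples

train_samples = load_labels("data_train.txt")
test_samples = load_labels("data_test.txt")
print(len(train_samples), "training samples,", len(test_samples), "test samples")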

dataset

This dataset was generated by code.

link:https://pan.baidu.com/s/1JgS1gSRcfnjWF_epU-E2vA password:wigu

The dataset contains 800,000 images:

  • 300,000 from Chinese novels
  • 100,000 from random digits 0-9
  • 100,000 from random codes
  • 300,000 from characters randomly selected by frequency

Generation applies these augmentations:

  • Random character spacing
  • Random font size
  • 10 different fonts
  • Blur
  • Noise (Gaussian, uniform, salt-and-pepper, Poisson)
  • ...

For more detail, see train_with_param.
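
As a rough illustration of that kind of generation, the sketch below renders a text line with a random font size, random character spacing, optional blur, and Gaussian noise using Pillow and NumPy. The font path and sample text are placeholders, and the real generator applies more augmentations than this.

# Simplified synthetic-sample generator: random font size, random character spacing,
# optional blur, Gaussian noise. The font path and text are placeholders; see
# train_with_param for the augmentations actually used.
import random
import numpy as np
from PIL import Image, ImageDraw, ImageFont, ImageFilter

def render_line(text, font_path="simhei.ttf"):
    size = random.randint(20, 32)                       # random font size
    spacing = random.randint(0, 4)                      # random character spacing
    font = ImageFont.truetype(font_path, size)
    width = sum(font.getbbox(c)[2] + spacing for c in text) + 10
    img = Image.new("L", (width, size + 12), color=255)
    draw = ImageDraw.Draw(img)
    x = 5
    for c in text:
        draw.text((x, 4), c, font=font, fill=0)
        x += font.getbbox(c)[2] + spacing
    if random.random() < 0.5:                           # blur half of the samples
        img = img.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.5, 1.5)))
    arr = np.asarray(img, dtype=np.float32)
    arr += np.random.normal(0, 8, arr.shape)            # Gaussian noise
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

render_line("你好世界 2024").save("sample.jpg")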

Or you can train on YCG09's dataset:

url:https://pan.baidu.com/s/1QkI7kjah8SPHwOQ40rS1Pw (passwd:lu7m)

Put your dataset into train/images and update the label files data_train.txt and data_test.txt.

generate your own dataset

Or you can generate your own dataset.

update

  1. Use the pretrained model to detect text

    • Add demo √
    • Add DenseNet training code √
    • Test GPU NMS √
    • Generate my own dataset √
  2. Add a framework to make training on your own dataset easy

    • Add YOLOv3 training code
    • Make the code easy to use on other datasets

Reference

  • https://github.com/chineseocr/chineseocr
  • https://github.com/YCG09/chinese_ocr