argman/EAST

Can I detect the rotation of the text? If so, how do I do that?

joaossmacedo opened this issue · 7 comments

  • I have trained EAST on my own dataset, and in this dataset the text appears in multiple orientations (upright, 90º, -90º, upside down);
  • The model detects the text regardless of orientation;
  • After gathering the output, I crop the image and pass the crop to a recognition model;
  • But the recognition model only recognizes text in the correct orientation;
  • How do I get the rotation angle of the box, so I can rotate the cropped image into the correct orientation?

Useful:

  • I know there is an "angle_map" in the model function, but I need this information after the detection function;
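For reference, here is a minimal sketch of what reading that angle back out could look like, assuming an RBOX-style geometry map as described in the EAST paper, where the 5th channel holds the per-pixel angle in radians and the feature map is at 1/4 of the input resolution. These are assumptions about the layout, not this repo's exact API, and `angle_at_center` is a hypothetical helper:

```python
import numpy as np

def angle_at_center(geometry, box_center, scale=4):
    """Look up the predicted angle (radians) for a detected box.

    geometry   -- (H, W, 5) RBOX geometry map; channel 4 is the angle
    box_center -- (x, y) of the box center in input-image pixels
    scale      -- stride between the input image and the geometry map
    """
    x, y = box_center
    # Map input-image coordinates onto the (smaller) geometry map.
    gx = int(round(x / scale))
    gy = int(round(y / scale))
    # Clamp so boxes near the border do not index out of range.
    gy = min(max(gy, 0), geometry.shape[0] - 1)
    gx = min(max(gx, 0), geometry.shape[1] - 1)
    return float(geometry[gy, gx, 4])
```

Note that the RBOX angle only covers small rotations (roughly ±45º), so by itself it cannot distinguish upright text from upside-down text.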

@joaossmacedo Can you please share your github repo for multiple orientations model, I need that for my project

I've found a solution that works, but it's sub-optimal.

First of all, there is a limitation: it only works if the angle is 0º, 90º, 180º or 270º.
Secondly, it increases processing time.

The idea

  1. Detect boxes;
  2. Crop the image according to a box;
  3. Check if height > width. If it is, rotate the crop 90º to make the text horizontal;
  4. Run the cropped image through the recognition model;
  5. If the score is low, rotate the image 180º;
  6. Run through the recognition model;
  7. Compare the results and use the better one.

The code

import cv2

# Detect text boxes with a 0.7 score threshold, then recognize each crop.
boxes = detect(detection_model, img, 0.7)

img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for box in boxes:
    cropped_image = crop_image(img, box)

    # Skip boxes whose crop is empty (e.g. the box fell outside the image).
    if cropped_image.size == 0:
        continue

    # Taller than wide: assume the text is vertical and make it horizontal.
    if cropped_image.shape[0] > cropped_image.shape[1]:
        cropped_image = cv2.rotate(cropped_image, cv2.ROTATE_90_CLOCKWISE)

    predict, probability = recognize(recognition_model, cropped_image)

    # A low score suggests the text may be upside down: try the 180º flip
    # and keep whichever prediction scores higher.
    if probability < 0.8:
        cropped_image = cv2.rotate(cropped_image, cv2.ROTATE_180)

        new_predict, new_probability = recognize(recognition_model, cropped_image)

        if new_probability > probability:
            predict = new_predict
            probability = new_probability
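`crop_image` isn't shown above; here is a minimal sketch of one possible implementation, assuming `box` is a set of four (x, y) corner points as EAST outputs and taking the axis-aligned bounding rectangle of those points. The name and signature come from the snippet above, but the body is illustrative:

```python
import numpy as np

def crop_image(img, box):
    """Crop the axis-aligned bounding rectangle of a 4-point box.

    img -- HxW (or HxWxC) image array
    box -- array-like of four (x, y) corner points
    """
    pts = np.asarray(box).reshape(-1, 2)
    # Clamp the rectangle to the image bounds; a box entirely outside
    # the frame then yields an empty crop, which the caller skips.
    x0 = max(int(pts[:, 0].min()), 0)
    y0 = max(int(pts[:, 1].min()), 0)
    x1 = max(min(int(pts[:, 0].max()), img.shape[1]), x0)
    y1 = max(min(int(pts[:, 1].max()), img.shape[0]), y0)
    return img[y0:y1, x0:x1]
```

A perspective warp (cv2.getPerspectiveTransform) would crop rotated boxes more tightly, but the axis-aligned version keeps the height/width check above meaningful.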

Alternative idea

One idea that we explored but didn't end up using was to run the crops through Tesseract to get the rotation angle. We decided against it because we would have needed to add Tesseract to our project and, in our case, recognizing was faster than detecting anyway.
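For anyone who does want to try that route: Tesseract's OSD (orientation and script detection) mode reports the clockwise rotation needed to upright the text. `pytesseract.image_to_osd` is a real pytesseract call, but the parsing helper and the sample output below are illustrative:

```python
import re

def parse_osd_rotation(osd_text):
    """Extract the 'Rotate: N' value (degrees to rotate clockwise)
    from Tesseract's OSD text output; default to 0 if absent."""
    match = re.search(r"Rotate:\s*(\d+)", osd_text)
    return int(match.group(1)) if match else 0

# With pytesseract and the Tesseract binary installed, usage would be:
#   import pytesseract
#   osd = pytesseract.image_to_osd(cropped_image)
#   angle = parse_osd_rotation(osd)
#   # rotate the crop by `angle` before running recognition
```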

@joaossmacedo Does the EAST model detect text regions of any orientation out of the box, or did you have to make changes for that?

Currently, I have trained it on some synthetic images for about 15k steps and it doesn't seem to detect all the orientations. I started from the ResNet checkpoint. I don't think the amount of data is a problem, since the training set is about 800,000 samples. Do I just keep training for more steps?

In the project I used EAST on, the data was also synthetic but only had text at 0º, 90º, 180º and 270º. It was able to detect the text at all of those orientations.

I didn't start from a previous checkpoint, so I can't comment on that specifically. However, I believe it should be able to detect text in all orientations, as evidenced by the images in the README.

I'm sorry I couldn't be more helpful.

@joaossmacedo Thanks. That makes sense. I'll poke around a bit more.

Here are a couple of results from the eval.py script on my trained model. It's a very basic one using all defaults (no changes made).
The bounding boxes look like they are not rotated. My guess is that eval.py does not use the angle information when it plots the bounding boxes?
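If eval.py is in fact discarding the angle (I haven't verified that), recovering rotated corners from (center, size, angle) is straightforward. A sketch in plain NumPy, with all names illustrative:

```python
import numpy as np

def rbox_corners(cx, cy, w, h, angle):
    """Return the 4 corner points of a w-by-h box centered at
    (cx, cy) and rotated by `angle` radians counter-clockwise."""
    # Corners of the unrotated box, relative to its center.
    local = np.array([[-w / 2, -h / 2],
                      [ w / 2, -h / 2],
                      [ w / 2,  h / 2],
                      [-w / 2,  h / 2]])
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    # Rotate each corner, then translate to the box center.
    return local @ rot.T + np.array([cx, cy])
```

The resulting points could then go straight into cv2.polylines to draw the rotated box.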

Pics are from the ICDAR15 test set:

(two detection result images attached)

Hi, I also checked rotation; there is no rotation correction in the text detection. I think that to add this option you would need a box for every character.
Also, for some text on a red background it didn't work well.