Any chance to see more pre-trained models?
longwall opened this issue · 2 comments
Hello,
I played a little with the provided model using the weights file ALAR_min_model_17_12_18.pth.
Given the word "min" in the name, I wonder whether there are other models.
Do you plan to publish them?
I've thought up some tricks to improve how accurately the regions cover the HTR text, but the results are still rough: in many cases important parts of letters are cropped out.
I have neither the hardware nor the labeled datasets for training.
Could you share a more powerful model?
Hi,
The word "min" in the name refers to the size of the model file (we remove all data related to the training state and leave only the model weights).
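To illustrate why stripping the training state shrinks the file, here is a minimal sketch using plain pickle (PyTorch checkpoints are pickled under the hood). The key names are illustrative only, not P2PaLA's actual checkpoint layout:

```python
import pickle

# Toy checkpoint: weights plus training state (optimizer moments, epoch, ...).
# Key names are hypothetical, for illustration only.
checkpoint = {
    "model_weights": {"conv1.weight": [0.1] * 1000},
    "optimizer_state": {"momentum": [0.0] * 2000},
    "epoch": 17,
}

full = pickle.dumps(checkpoint)
# Keep only the weights -- the "min" file drops everything else.
minimal = pickle.dumps({"model_weights": checkpoint["model_weights"]})
print(len(minimal) < len(full))
```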
We have no plans to release new models in the short term, as our hardware is limited and is currently used in other projects. Nonetheless, you can download several publicly available datasets (labeled at different levels) and train your own models. For instance, in our examples folder you can find the links to the OHG, Bozen and cBAD datasets.
Regards,
Thanks for the reply! I found a good trick in P2PaLA to improve the accuracy of fitting each segment.
First, the usual algorithm runs and returns a polygon per line.
Then I rotate the image by 180 degrees and pass it to the program again. It returns a new set of polygons, but these are based on the "overlines" (top edges) of the lines. The polygon coordinates from the rotated image are then rotated back and can be merged with the first set. The first set fits the bottoms of the lines well; the second is good at the upper border.
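The rotate-and-merge step above can be sketched as follows. This is my own minimal NumPy sketch, not P2PaLA code: a 180-degree rotation flips both axes, so a point (x, y) detected on the rotated image maps back to (W-1-x, H-1-y) in the original frame:

```python
import numpy as np

def rotate_180(image: np.ndarray) -> np.ndarray:
    """Rotate an image 180 degrees by reversing both axes."""
    return image[::-1, ::-1]

def map_polygon_back(polygon: np.ndarray, width: int, height: int) -> np.ndarray:
    """Map polygon vertices (x, y) detected on the 180-degree-rotated
    image back into the original image's coordinate frame."""
    mapped = polygon.copy()
    mapped[:, 0] = width - 1 - polygon[:, 0]
    mapped[:, 1] = height - 1 - polygon[:, 1]
    return mapped

# Example: a page of height 100, width 200, and one polygon detected
# on the rotated copy (vertex coordinates are made up for illustration).
h, w = 100, 200
poly_rotated = np.array([[10, 20], [50, 20], [50, 40], [10, 40]])
poly_original = map_polygon_back(poly_rotated, w, h)
```

The two polygon sets (baseline-fitted and overline-fitted) can then be merged per line, e.g. by taking the union of each matched pair.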
But the big problem with the tops of capital letters still remains: almost all capital letters are cropped at the top.
If you can suggest any idea, please share. The only way I see is a dataset labeled with "overlines" instead of baselines; there, all capital letters should be covered properly.