autonise/CRAFT-Remade

Model obtained after training

Closed this issue · 10 comments

After training with weak supervision using the pre-trained model, we get a .pkl file. How do we convert it to .pth?

Is the .pkl an image file, or have you saved the model inside it?

Hello @rakshanaa, sorry for the late response. The model has been saved inside the .pkl file; torch internally uses Python's pickle for serialization, so the extension does not matter.
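For example, a minimal sketch of loading the checkpoint and re-saving it under a .pth name (the file names are placeholders, and whether the file holds the full model object or just a state_dict depends on how it was saved):

import torch

# torch.load unpickles whatever was saved; the .pkl extension is only a naming choice
checkpoint = torch.load('final_model.pkl', map_location='cpu')

# nothing needs to be "converted": re-saving under a .pth name produces an equivalent file
torch.save(checkpoint, 'final_model.pth')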

Thank you @mayank, can you tell us how to test the trained model on other unseen data, or provide the method?
After getting the model I can load it on my own, but I do not know what kind of prediction call to make.

import cv2
import torch

image = cv2.imread("image.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # grayscale copy, currently unused

# the model object itself was pickled, so torch.load returns it directly
torch_model = torch.load('final_model.pkl', map_location='cpu')

torch_model.predict(image)  # <- this is the part I am unsure about: what call gives the prediction?

cv2.imshow("Test_Image", image)
cv2.waitKey(0)
cv2.destroyAllWindows()

I need to get bounding boxes for an image using the trained model. What prediction call can I add to test and see the output?

Hello @rakshanaa, can you please let me know the data format and how you trained the model?
What annotation format did you use?

Hi @saichandra1199, I trained my model on .jpg images with weak supervision. As for the data format: I first tried polygon annotations but did not get good results, so I changed to rectangles.

OK @rakshanaa, thanks.
Did you train it on ICDAR data or your own dataset?
Is it working better with rectangles?

Hi @saichandra1199, yes, for me it worked well with rectangles. But after testing on my data it is not as good as the model provided in the repo; he may have trained it on more images.

OK @rakshanaa, thanks for the information.
One more thing: I am unable to understand the annotation format while training a model. In what format should the box annotations be given?

Hi @saichandra1199
-> First tag the dataset with rectangles; that gave me reasonably good results.
-> I tagged with polygons first, but the results were not good, so I changed to rectangles.
-> Once tagging is done, create the test and train sets with data_structure_ic13.py. In that file, check line 54 (icdar2013_test) to verify that the annots pick up the rectangle box values correctly. If they are wrong, adjust annots = [[x[0], x[1]], [x[0], x[3]], [x[2], x[3]], [x[2], x[1]]], changing the x indices according to the dataset you have (see the sketch below).
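For reference, a minimal sketch of that corner expansion (assuming each rectangle is stored as [x_min, y_min, x_max, y_max]; the function name and index order here are illustrative, so swap the indices if your dataset orders the values differently):

def rect_to_annots(x):
    # x = [x_min, y_min, x_max, y_max]
    # returns the four corners: top-left, bottom-left, bottom-right, top-right
    return [[x[0], x[1]], [x[0], x[3]], [x[2], x[3]], [x[2], x[1]]]

print(rect_to_annots([10, 20, 110, 60]))
# [[10, 20], [10, 60], [110, 60], [110, 20]]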


Hello @rakshanaa, please have a look at the synthesize function in main.py. It takes as input a model path and the folder where the images are kept, and saves the predicted bounding box coordinates in a JSON file. It also generates the images with the bounding boxes drawn.
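If you want to inspect the results yourself, here is a minimal sketch that reads such a JSON file and draws the boxes with OpenCV (the file names and the JSON layout, a dict of four-point boxes per image, are assumptions; check the file the repo actually writes):

import json

import cv2
import numpy as np

# hypothetical paths and layout -- adapt them to what synthesize actually produces
with open('predictions.json') as f:
    predictions = json.load(f)  # assumed: {"image.jpg": [[[x, y], [x, y], [x, y], [x, y]], ...]}

image = cv2.imread('image.jpg')
for box in predictions['image.jpg']:
    points = np.array(box, dtype=np.int32).reshape(-1, 1, 2)
    cv2.polylines(image, [points], isClosed=True, color=(0, 255, 0), thickness=2)

cv2.imwrite('image_with_boxes.jpg', image)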

Thank you @mayank-git-hub