abreheret/PixelAnnotationTool

Resizing the color masks somehow messes up the color detection

tanmaypandey7 opened this issue · 3 comments

Hi, I have labelled around 300 images for my training dataset. Unfortunately, since I was getting an out-of-memory error, I had to resize all of them, including the masks. But when I run a pixel color detection script, the resized images show multiple labels, whereas the original ones only show a single color.
Note that all my images have one label only; I have no idea why this is happening.
I'll attach some images for reference.
Original images (attached): 2653005010666-1, 3021690011258-10, 3021760290057-1, 3021760290422-1, 3021760291047-1, 2600438012149-1

Resized ones (attached, same filenames): 2653005010666-1, 3021690011258-10, 3021760290057-1, 3021760290422-1, 3021760291047-1, 2600438012149-1

Output of the pixel color detection script on the original images: (Screenshot (42) attached)

Output on the resized images: (Screenshot (43) attached)
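
For context, the kind of check described above can be as simple as listing the distinct colours in a mask. The sketch below is an assumption using OpenCV and NumPy, not the reporter's actual script, and the file path is hypothetical:

```python
import cv2
import numpy as np

def list_label_colors(mask_path):
    """Return the distinct colours present in a label mask."""
    mask = cv2.imread(mask_path, cv2.IMREAD_COLOR)   # BGR, uint8
    if mask is None:
        raise FileNotFoundError(mask_path)
    return np.unique(mask.reshape(-1, 3), axis=0)    # one row per distinct colour

# A single-label mask should contain at most two colours (background + label);
# the resized masks above report many more.
print(list_label_colors("path/to/mask.png"))         # hypothetical path
```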

Your issue has nothing to do with PixelAnnotationTool (so I am closing the issue).
In addition:

  • you do not provide your resize script
  • I don't see a difference between the original image and the resized image (the sizes are the same)

But judging from your images, it seems to me that your resize script uses CUBIC or LINEAR interpolation, so the label borders are "averaged". You need to use NEAREST interpolation instead.
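
A minimal sketch of that suggestion, assuming the resizing is done with OpenCV's cv2.resize (the interpolation flags below are OpenCV's; paths and sizes are placeholders, not taken from the reporter's script):

```python
import cv2

def resize_mask(mask_path, out_path, size=(512, 512)):
    """Resize a label mask without blending colours at the label borders."""
    mask = cv2.imread(mask_path, cv2.IMREAD_COLOR)
    # INTER_NEAREST copies existing pixel values instead of averaging them,
    # so no new "in-between" colours appear along the label borders.
    resized = cv2.resize(mask, size, interpolation=cv2.INTER_NEAREST)
    cv2.imwrite(out_path, resized)

# The photographs themselves can still use an averaging filter;
# only the masks need nearest-neighbour interpolation.
resize_mask("path/to/mask.png", "path/to/mask_resized.png")  # hypothetical paths
```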

Since I am a noob at Markdown, here is the script:
just adding an Imgur link to make the code a bit more understandable
https://imgur.com/K5vc8D1

Thank you, dear sir. Simply changing cv2.INTER_AREA to cv2.INTER_NEAREST did the trick. I cannot thank you enough, as you have saved me from labeling 500 images again.