rishizek/tensorflow-deeplab-v3

The mask size differs from the input image size

Jayanth-L opened this issue · 1 comment

I understand that there is some resizing due to convolutions, but can't I obtain a mask with the same size as the input image?

@Jayanth-L I think so. I just added removal of the white border using ImageChops (piece of code below) and then a cv2.resize() at the end of the pipeline:

from PIL import Image, ImageChops

def trim(im):
    # Use the top-left pixel as the background color and crop away
    # the uniform border that surrounds the actual content.
    bg = Image.new(im.mode, im.size, im.getpixel((0, 0)))
    diff = ImageChops.difference(im, bg)
    diff = ImageChops.add(diff, diff, 2.0, -100)
    bbox = diff.getbbox()
    if bbox:
        return im.crop(bbox)
    return im  # nothing to trim; return the image unchanged
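
A minimal sketch of the resize step, assuming the predicted mask has been saved as an image; the file names here are just placeholders for whatever your inference script produces:

import cv2
import numpy as np
from PIL import Image

# Placeholder file names: "input.jpg" is the original image,
# "mask.png" is the mask produced by the model.
original = cv2.imread("input.jpg")
mask = Image.open("mask.png")

# Trim the uniform border, then resize the mask back to the
# original input resolution (cv2.resize expects (width, height)).
mask = trim(mask)
mask = np.array(mask)
mask = cv2.resize(mask,
                  (original.shape[1], original.shape[0]),
                  interpolation=cv2.INTER_NEAREST)  # nearest keeps label values crisp

Using INTER_NEAREST avoids interpolating between class labels, which would otherwise introduce invalid values along the mask boundaries.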