dwofk/fast-depth

Why did you do the processing in this way?

LulaSan opened this issue · 0 comments

Hi Diana and thank you for your work.
I am running some real-time tests with both a webcam and a RealSense D435i, and I pre-process the input image the same way you did, but I obtain worse results than on the test images.

    def val_transform(self, rgb, depth):
        depth_np = depth
        # `transforms` is the repo's custom dataloaders/transforms.py
        # (its Resize accepts a float scale factor); iheight = 480
        transform = transforms.Compose([
            transforms.Resize(250.0 / iheight),
            transforms.CenterCrop((228, 304)),
            transforms.Resize(self.output_size),
        ])
        rgb_np = transform(rgb)
        rgb_np = np.asfarray(rgb_np, dtype='float') / 255
        return rgb_np, depth_np

My input images are also 480x640, and I have already taken care to convert them from BGR to RGB.
Can I ask why you applied this kind of pre-processing?
In fact, after the center crop I obtain an image of shape (239, 319, 3), i.e. not square, so the last resize distorts the image a little, right?
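For reference, here is a small sketch tracing the sizes each stage should produce, assuming the NYU constants iheight = 480, iwidth = 640 and output_size = (224, 224) from the repo (these values are my assumption about the configuration):

```python
# Trace image sizes through the val_transform pipeline.
iheight, iwidth = 480, 640        # assumed raw frame size (NYU / camera)
scale = 250.0 / iheight           # first Resize uses a scale factor

h1 = round(iheight * scale)       # after Resize
w1 = round(iwidth * scale)
h2, w2 = 228, 304                 # after CenterCrop((228, 304))
out = (224, 224)                  # assumed final network input size

print((h1, w1), (h2, w2), out)
# → (250, 333) (228, 304) (224, 224)

# The crop aspect ratio 228/304 = 0.75 equals 480/640, so the crop
# itself preserves the aspect ratio; only the final resize to a
# square changes it.
```

If the crop really returns (239, 319, 3) in your setup, the preceding resize may not be producing the expected (250, 333) intermediate size, which would be worth checking separately.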

Thank you in advance