The input and output image sizes are not identical
beargolden opened this issue · 3 comments
Hello, Reza,
Thank you for your wonderful BCDUnet_DIBCO project. I found that the sizes of the input and output images are not identical. The bug may lie in extract_ordered_overlap or recompone_overlap. Please fix it, and thanks again~
Dear Beargolden,
Thanks for your interest. In the evaluation code I use stride_h and stride_w to move a window across the image when choosing patches, so it may miss some pixels at the right and bottom of the image. Generally speaking, with a stride of 1 it would cover all the pixels and the output size would be identical to the input size, but it would also increase the computation time. It is not hard to fix; I will fix it as soon as I have free time.
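To see why strided patch extraction can miss border pixels, here is a minimal sketch. The sizes below (image height 400, patch size 128, stride 64) are hypothetical, chosen only for illustration; the names patch_h and stride_h follow the explanation above.

```python
# Hypothetical sizes for illustration only.
img_h, patch_h, stride_h = 400, 128, 64

# Number of full window positions that fit along the height.
n_windows = (img_h - patch_h) // stride_h + 1

# The last window ends at this row; anything below it is never visited.
covered_h = (n_windows - 1) * stride_h + patch_h

print(n_windows, covered_h)  # prints: 5 384
```

With these numbers the last 16 rows of the image are never covered by any patch, which is exactly why the reconstructed output ends up smaller than the input.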
One simple approach is to add zero padding to the image and remove it after prediction. This way you don't need to modify any function. For example:
Test_image with size 512*400
New_test = np.zeros((Test_image.shape[0] + stride_h, Test_image.shape[1] + stride_w))
New_test[0:Test_image.shape[0], 0:Test_image.shape[1]] = Test_image
The code above adds zero padding to the bottom and right side of the image.
Now extract the patches and run the estimation, then take the desired region. For example:
result = estimated[0:Test_image.shape[0], 0:Test_image.shape[1]]
It should produce an identical image.
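Putting the steps above together, here is a self-contained sketch of the pad-then-crop workaround. The predict function is a stand-in for the model's patch-based inference (here just an identity copy so the example runs on its own); the stride values are assumptions for illustration.

```python
import numpy as np

def predict(img):
    # Placeholder for the model's patch-based estimation; identity here
    # so the example is self-contained and runnable.
    return img.copy()

stride_h, stride_w = 64, 64  # assumed stride values for illustration
test_image = np.random.rand(512, 400)

# Pad the bottom and right with zeros so the sliding window covers every pixel.
new_test = np.zeros((test_image.shape[0] + stride_h,
                     test_image.shape[1] + stride_w))
new_test[0:test_image.shape[0], 0:test_image.shape[1]] = test_image

estimated = predict(new_test)

# Crop the prediction back to the original image size.
result = estimated[0:test_image.shape[0], 0:test_image.shape[1]]
```

After cropping, result has exactly the same shape as the original test image.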
Hello, Reza,
Thank you for your instruction. It temporarily solves the problem, but the performance of the resulting binary images is not as high as the scores computed in Evaluate.py~
Merry Christmas!
Sorry for the late reply. Generally speaking, adding a few zero pixels at the border of the image should not change the performance much; check your code again.