MIC-DKFZ/dynamic-network-architectures

Regarding the setting of transposed convolution in unet_decoder.py.

waxybywmyyfbk opened this issue · 1 comment

I hope this message finds you well. I've been studying your code. In the original code at dynamic-network-architectures/dynamic_network_architectures/building_blocks/unet_decoder.py on line 53, I noticed that both the kernel_size and stride parameters of the transposed convolution are set using the stride value of the corresponding stage in the encoder. This approach seems a bit different from the common practice, where we typically set the kernel_size of the transposed convolution to match the kernel_size of the corresponding stage in the encoder. I was wondering if this design choice was intentional or perhaps an oversight?
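For context, the difference between the two parameterizations can be checked with the standard transposed-convolution output-size formula (a minimal sketch; the layer sizes and the encoder stride of 2 are assumed for illustration, not taken from the library's code):

```python
def conv_transpose_out(n_in, kernel_size, stride, padding=0, output_padding=0):
    # standard output-size formula for a transposed convolution
    return (n_in - 1) * stride - 2 * padding + kernel_size + output_padding

stride = 2  # assumed stride of the corresponding encoder stage

# choice in unet_decoder.py: kernel_size = stride -> exact 2x upsampling, no padding needed
print(conv_transpose_out(8, kernel_size=stride, stride=stride))  # 16

# alternative: kernel_size = encoder kernel (e.g. 3) needs padding/output_padding to reach 16
print(conv_transpose_out(8, kernel_size=3, stride=stride, padding=1, output_padding=1))  # 16
```

With kernel_size equal to the stride, the spatial size doubles with no extra padding bookkeeping; matching the encoder's kernel_size instead requires compensating padding to produce the same output shape.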

Thank you for your time and consideration. I look forward to your response.

See this: https://distill.pub/2016/deconv-checkerboard/

And this video which explains it nicely: https://www.youtube.com/watch?v=ilkSwsggSNM

Probably doesn't impact things a lot in practice, but having non-overlapping kernels seemed nicer to me.
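The checkerboard argument from the Distill article can be made concrete with a small 1-D count (an illustrative sketch, not code from the repository): with kernel_size equal to the stride every output position receives exactly one kernel tap, while a larger kernel covers positions unevenly.

```python
def contribution_counts(n_in, kernel_size, stride):
    # For a 1-D transposed convolution with no padding, count how many
    # kernel taps land on each output position.
    n_out = (n_in - 1) * stride + kernel_size
    counts = [0] * n_out
    for i in range(n_in):
        for k in range(kernel_size):
            counts[i * stride + k] += 1
    return counts

# kernel_size == stride: uniform coverage, one tap per output position
print(contribution_counts(4, kernel_size=2, stride=2))  # [1, 1, 1, 1, 1, 1, 1, 1]

# kernel_size > stride: alternating 1/2 coverage -> checkerboard artifacts
print(contribution_counts(4, kernel_size=3, stride=2))  # [1, 1, 2, 1, 2, 1, 2, 1, 1]
```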
Best,
Fabian