Misunderstanding about the UNet architecture in your work
hiterjoshua opened this issue · 3 comments
Thanks for your awesome reproduction work! While reading your code, I was a little curious about the number of UNet layers. According to your code in net.py, you use a 14-layer UNet:
```python
layer_size = 7
self.layer_size = layer_size
self.enc_1 = PCBActiv(input_channels, 64, bn=False, sample='down-7')
self.enc_2 = PCBActiv(64, 128, sample='down-5')
self.enc_3 = PCBActiv(128, 256, sample='down-5')
self.enc_4 = PCBActiv(256, 512, sample='down-3')
for i in range(4, self.layer_size):
    name = 'enc_{:d}'.format(i + 1)
    setattr(self, name, PCBActiv(512, 512, sample='down-3'))
```
This seems a little different from the paper, which uses 16 layers in total: the encoder and the decoder each have 8 layers. I am wondering whether this is a trick of yours for training on 256×256 images, or just an inadvertent error here. Thank you for your time.
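For reference, the layer count produced by that loop can be checked with a small stand-in sketch. `Block` below is just a placeholder for the repo's `PCBActiv` (an assumption; the real class lives in net.py), since only the number of stages matters here:

```python
# Stand-in for PCBActiv, recording only the constructor arguments.
class Block:
    def __init__(self, in_ch, out_ch, bn=True, sample='down-3'):
        self.in_ch, self.out_ch, self.sample = in_ch, out_ch, sample

def build_encoder(layer_size, input_channels=3):
    # Mirrors the loop pattern from net.py, collecting stages in a dict.
    enc = {}
    enc['enc_1'] = Block(input_channels, 64, bn=False, sample='down-7')
    enc['enc_2'] = Block(64, 128, sample='down-5')
    enc['enc_3'] = Block(128, 256, sample='down-5')
    enc['enc_4'] = Block(256, 512, sample='down-3')
    for i in range(4, layer_size):
        enc['enc_{:d}'.format(i + 1)] = Block(512, 512, sample='down-3')
    return enc

# layer_size=7 yields 7 encoder stages (14 layers with a mirrored decoder);
# the paper's 8+8 layout would correspond to layer_size=8.
print(len(build_encoder(7)))  # 7
print(len(build_encoder(8)))  # 8
```

So changing `layer_size` from 7 to 8 would recover the paper's symmetric 8-layer encoder/decoder.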
It's just an inadvertent error, thank you for pointing it out.
Thank you for your kind answer! I also want to ask whether you used multi-GPU training to speed up the training process. I made some changes to your code to use nn.DataParallel and Horovod, but the training time is longer than with your version. I am wondering if you have tried this before.
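For clarity, this is the nn.DataParallel pattern I mean. `Net` here is only a placeholder model for illustration, not the repo's PConvUNet:

```python
import torch
import torch.nn as nn

# Placeholder model standing in for the repo's network (an assumption).
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = Net()
if torch.cuda.device_count() > 1:
    # Each batch is split along dim 0 across the visible GPUs;
    # gradients are gathered back to the default device.
    model = nn.DataParallel(model)
model = model.to('cuda' if torch.cuda.is_available() else 'cpu')

out = model(torch.randn(2, 3, 16, 16))
print(tuple(out.shape))  # (2, 8, 16, 16)
```

Note that nn.DataParallel replicates the model every forward pass, so for small batches the scatter/gather overhead can easily outweigh the speedup, which may explain the longer training time I saw.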
I haven't tried that, sorry.