xthan/VITON

Regarding Activation Function

akshay951228 opened this issue · 1 comment

Leaky ReLU works great compared to ReLU, but in the VITON stage-1 network, the encoder uses leaky ReLU as its activation function, while the decoder uses ReLU for every layer except the last.
Can I know the reason why the decoder network uses ReLU rather than leaky ReLU?
I have googled a lot but did not find any good explanation.

xthan commented

This is just a design choice. I do not think leaky ReLU and ReLU have a significant performance difference. I followed the code of pix2pix for a fair comparison. Using all ReLUs should also be fine --- BigGAN: https://arxiv.org/pdf/1809.11096.pdf uses all ReLUs, while StyleGAN http://stylegan.xyz/paper uses some leaky ReLUs.
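
For reference, here is a minimal sketch of the pix2pix-style convention being discussed (this is an illustration, not the actual VITON or pix2pix code): LeakyReLU with slope 0.2 in the downsampling encoder blocks, plain ReLU in the upsampling decoder blocks, and tanh on the final layer so outputs land in [-1, 1].

```python
import torch.nn as nn

def encoder_block(in_ch, out_ch):
    # Downsampling block: strided conv + LeakyReLU(0.2), pix2pix-style.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

def decoder_block(in_ch, out_ch, last=False):
    # Upsampling block: transposed conv + ReLU, except the last layer,
    # which uses tanh to map the output into [-1, 1].
    layers = [nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1)]
    if last:
        layers.append(nn.Tanh())
    else:
        layers += [nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)
```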