Some questions about your code
BurnWan opened this issue · 3 comments
BurnWan commented
Line 185 in c60c536
In this line, I wonder why you used torch.flip to flip the weight. I think there is no need to flip the weights before performing convolution in a CNN.
ShenYujun commented
That is because we use convolution to replace the fully-connected layer in the official PGGAN.
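For what it's worth, here is a minimal sketch of that equivalence (toy shapes for illustration, not the model's actual channel counts, and not the repo's code): a conv whose kernel covers the whole feature map computes the same thing as a fully-connected layer over the flattened input.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy shapes for illustration only.
in_ch, out_ch, k = 3, 5, 4
x = torch.randn(1, in_ch, k, k)
w = torch.randn(out_ch, in_ch, k, k)

# A conv whose kernel spans the whole feature map acts as a
# fully-connected layer over the flattened input.
y_conv = F.conv2d(x, w).view(1, out_ch)
y_fc = F.linear(x.reshape(1, -1), w.reshape(out_ch, -1))
print(torch.allclose(y_conv, y_fc, atol=1e-5))  # True
```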
BurnWan commented
> That is because we use convolution to replace the fully-connected layer in the official PGGAN.
I see that. What I mean is: with torch.conv2d(x, weight), the result is y = x[0]w[0] + x[1]w[1] + x[2]w[2] + x[3]w[3] (cross-correlation), not y = x[0]w[3] + x[1]w[2] + x[2]w[1] + x[3]w[0] (true convolution). In that case, it is unnecessary to use the flip function.
sefa/models/pggan_discriminator.py
Line 336 in c60c536
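As a quick sanity check that no flipping happens inside torch's convolution (a toy example, not the repo's code):

```python
import torch
import torch.nn.functional as F

x = torch.tensor([[[[1., 2., 3., 4.]]]])      # shape (1, 1, 1, 4)
w = torch.tensor([[[[10., 20., 30., 40.]]]])  # shape (1, 1, 1, 4)

# conv2d is cross-correlation: y = x[0]w[0] + x[1]w[1] + x[2]w[2] + x[3]w[3].
y = F.conv2d(x, w)
print(y.item())              # 300.0 = 1*10 + 2*20 + 3*30 + 4*40
print((x * w).sum().item())  # 300.0, same value: the weights are not flipped
```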
ShenYujun commented
You are correct. We just want to make sure we get the same factorized results as with the official model.
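To illustrate the point (a toy sketch with the same illustrative shapes as above, not the actual conversion code): flipping the kernel only switches which spatial flattening order the dense weight matrix assumes, which is what lets the converted weights reproduce the official model's outputs, and hence the same factorization.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
in_ch, out_ch, k = 3, 5, 4
x = torch.randn(1, in_ch, k, k)
w = torch.randn(out_ch, in_ch, k, k)

# Flipping the kernel is equivalent to flipping the input's spatial
# order: conv2d(x, flip(w)) matches a dense layer whose weight matrix
# was stored under the reversed spatial flattening convention.
y_flipped_w = F.conv2d(x, w.flip(2, 3)).view(1, out_ch)
y_flipped_x = F.linear(x.flip(2, 3).reshape(1, -1), w.reshape(out_ch, -1))
print(torch.allclose(y_flipped_w, y_flipped_x, atol=1e-5))  # True
```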