lukasruff/Deep-SVDD-PyTorch

why use conv2d rather than ConvTranspose2d in decoder?

MarStarck opened this issue · 1 comment

I'm new to PyTorch, but I've noticed that in the autoencoder part you use Conv2d in the decoder. I wonder why?

Also, if I increase the number of epochs when training the autoencoder, the AUC of the AE becomes larger than that of Deep SVDD. Does this mean the AE is better than Deep SVDD?

I agree that applying transposed convolutions using the ConvTranspose2d module in PyTorch is more appropriate for the decoder part of convolutional autoencoders.

I've updated the network architectures accordingly, though there's not much of a difference in performance.
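For illustration, here is a minimal sketch of the two decoder styles being discussed (a hypothetical toy architecture, not the repo's exact networks): (a) upsampling followed by a regular Conv2d, and (b) a learned upsampling via ConvTranspose2d. Both map a 4x4 feature map to 16x16.

```python
import torch
import torch.nn as nn

# (a) Upsample + Conv2d: fixed interpolation doubles the spatial size,
# then an ordinary convolution refines the features.
decoder_conv = nn.Sequential(
    nn.Upsample(scale_factor=2),            # 4x4 -> 8x8
    nn.Conv2d(8, 4, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Upsample(scale_factor=2),            # 8x8 -> 16x16
    nn.Conv2d(4, 1, kernel_size=3, padding=1),
)

# (b) ConvTranspose2d: the upsampling itself is learned.
decoder_deconv = nn.Sequential(
    nn.ConvTranspose2d(8, 4, kernel_size=2, stride=2),  # 4x4 -> 8x8
    nn.ReLU(),
    nn.ConvTranspose2d(4, 1, kernel_size=2, stride=2),  # 8x8 -> 16x16
)

x = torch.randn(1, 8, 4, 4)
print(decoder_conv(x).shape)    # torch.Size([1, 1, 16, 16])
print(decoder_deconv(x).shape)  # torch.Size([1, 1, 16, 16])
```

Both variants produce the same output shape, which is consistent with the observation that swapping one for the other changes performance only marginally.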