Newmu/dcgan_code

using GAN and deconv with "valid" border mode

nikjetchev opened this issue · 0 comments

Hi, after playing with your code and getting very good results with it as it is, I am now looking to try other architectures and modify it. I want to try a model with no zero padding, i.e. border_mode="valid", instead of the current border_mode=(2, 2).

I have modified the deconv function from lib/ops.py so that it produces the proper output dimensions for the valid border mode; it is now called deconvV:

def deconvV(X, w, subsample=(1, 1), border_mode=(0, 0), conv_mode='conv'):
    # like deconv() in lib/ops.py, but the output spatial size follows the
    # "valid" rule: (input - 1) * stride + kernel, instead of input * stride
    img = gpu_contiguous(X)
    kerns = gpu_contiguous(w)
    desc = GpuDnnConvDesc(border_mode=border_mode, subsample=subsample,
                          conv_mode=conv_mode)(gpu_alloc_empty(img.shape[0], kerns.shape[1],
                                               (img.shape[2] - 1) * subsample[0] + kerns.shape[2],
                                               (img.shape[3] - 1) * subsample[1] + kerns.shape[3]).shape,
                                               kerns.shape)
    out = gpu_alloc_empty(img.shape[0], kerns.shape[1],
                          (img.shape[2] - 1) * subsample[0] + kerns.shape[2],
                          (img.shape[3] - 1) * subsample[1] + kerns.shape[3])
    d_img = GpuDnnConvGradI()(kerns, img, out, desc)
    return d_img

The code works with this operation, but I wonder whether it is correct. GpuDnnConvGradI and GpuDnnConvDesc do not raise an error even if I pass other values for the sizes, so I can never be sure whether I have a bug there or not.
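A minimal sanity check of just the size arithmetic (pure Python, no Theano or GPU involved) could look like this: a "valid" forward convolution with kernel k and stride s maps an input of size o to (o - k) // s + 1, so the (i - 1) * s + k used in deconvV above should map any i back exactly:

def conv_valid_out(o, k, s):
    # output size of a forward "valid" convolution
    return (o - k) // s + 1

def deconv_valid_out(i, k, s):
    # output size used in deconvV above
    return (i - 1) * s + k

for i in (4, 7, 16):
    for k in (3, 5):
        for s in (1, 2):
            o = deconv_valid_out(i, k, s)
            assert conv_valid_out(o, k, s) == i, (i, k, s)

Of course this only confirms the shape bookkeeping, not that GpuDnnConvGradI is computing the operation I intend.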

I have also replaced the respective border mode in both the discriminator and the generator, and taken care of all model dimensions so that it runs. However, it runs for many iterations, sometimes looks as if it has learned something, but afterwards it just produces noise. Could I have made some error in my implementation of the deconv dimensions?
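For illustration, here is roughly how the generator's spatial sizes behave once the padding is removed (assuming the stock kernel size of 5 and stride of 2 purely as an example; the exact numbers depend on the layer sizes one actually picks):

def trace_generator(start=4, n_layers=4, k=5, s=2):
    # chain of feature-map sizes through successive valid-mode deconvs
    sizes = [start]
    for _ in range(n_layers):
        sizes.append((sizes[-1] - 1) * s + k)
    return sizes

print(trace_generator())  # [4, 11, 25, 53, 109] instead of the padded [4, 8, 16, 32, 64]

So the feature maps no longer double cleanly, which is why all the surrounding layer sizes have to be adjusted by hand.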

Another explanation could be that the resulting architecture (without zero padding) is much more unstable with respect to generator collapse, as described by Tim Salimans et al. in "Improved Techniques for Training GANs".

Thanks a lot for your help.