Newmu/dcgan_code

How to use this for image retrieval?

shaih82 opened this issue · 0 comments

I trained DCGAN on my own dataset, and now I want to use the network for image retrieval.
I have the generator network, which takes a length-100 encoding and transforms it into an image.
Is there a simple way to reverse this process, so that I can feed in an image and get back its length-100 encoding?
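For reference, the generator I'm trying to invert has the usual DCGAN structure. This is only a rough paraphrase of the repo's gen() from memory (relu, batchnorm, deconv, tanh and T are the repo's helpers from lib.activations, lib.ops and theano.tensor; the exact reshape size and filter counts may differ from what I actually trained):

ngf = 128  # generator filter base, whatever value was used at training time

def gen(Z, w, g, b, w2, g2, b2, w3, g3, b3, w4, g4, b4, wx):
    # project the length-100 code Z up to a small spatial map
    h = relu(batchnorm(T.dot(Z, w), g=g, b=b))
    h = h.reshape((h.shape[0], ngf * 8, 4, 4))
    # fractionally-strided convolutions double the resolution at each layer
    h2 = relu(batchnorm(deconv(h, w2, subsample=(2, 2), border_mode=(2, 2)), g=g2, b=b2))
    h3 = relu(batchnorm(deconv(h2, w3, subsample=(2, 2), border_mode=(2, 2)), g=g3, b=b3))
    h4 = relu(batchnorm(deconv(h3, w4, subsample=(2, 2), border_mode=(2, 2)), g=g4, b=b4))
    # final deconv to image space, squashed to [-1, 1]
    x = tanh(deconv(h4, wx, subsample=(2, 2), border_mode=(2, 2)))
    return x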
To invert it, I tried reversing the net like so:

def gen_inv(X, w, g, b, w2, g2, b2, w3, g3, b3, w4, g4, b4, wx):
    # run the generator's layers backwards, swapping each deconv for a strided conv
    # and reusing the generator's own weights
    x = dnn_conv(X, wx, subsample=(2, 2), border_mode=(2, 2))
    x = relu(batchnorm(x, g=g4, b=b4))
    x = dnn_conv(x, w4, subsample=(2, 2), border_mode=(2, 2))

    h3 = relu(batchnorm(x, g=g3, b=b3))
    h3 = dnn_conv(h3, w3, subsample=(2, 2), border_mode=(2, 2))

    h2 = relu(batchnorm(h3, g=g2, b=b2))
    h2 = dnn_conv(h2, w2, subsample=(2, 2), border_mode=(2, 2))

    # flatten the remaining feature maps and undo the initial projection
    h2 = T.flatten(h2, 2)
    h = relu(batchnorm(h2, g=g, b=b))
    h = T.dot(h, w.T)
    return h
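For completeness, this is roughly how I compile and call it (the names here are just illustrative; gen_params is my list of trained generator parameters in the same order as the arguments above):

import theano
import theano.tensor as T

X = T.tensor4()  # batch of query images
_invert = theano.function([X], gen_inv(X, *gen_params))
codes = _invert(query_images.astype('float32'))  # hoped-for length-100 encodings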

But no matter what the input is, I get the same output.
I figured batchnorm was running in training mode, so I tried removing it as well,
but I still got very weird results.
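To clarify what I mean by training mode: as far as I can tell, the batchnorm here normalizes each feature with the statistics of the current minibatch, so a small retrieval batch gets centred and scaled against itself. A minimal sketch of that behaviour (not the repo's exact implementation):

import theano.tensor as T

def batch_stat_norm(x, g, b, e=1e-8):
    # the statistics come from whatever batch is being fed in right now,
    # so a single query image is normalized against itself
    u = T.mean(x, axis=0)
    s = T.mean(T.sqr(x - u), axis=0)
    return (x - u) / T.sqrt(s + e) * g + b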

Any advice?

cheers,
SH