sugyan/tf-dcgan

What's the format of the 'train_image'?

wangguanan opened this issue · 5 comments

First of all, thanks to the author for this code; it's very easy to read and understand.
However, when I ran it on the MNIST dataset, I got an error and was confused about the train_images. I loaded MNIST with batch size 128 and reshaped it to [128, 28, 28, 1], then started the main code with the reshaped data, just like the demo shown in the README.
Finally I got the error “ValueError: Trying to share variable d/conv1/conv2d/kernel, but specified shape (5, 5, 1, 64) and found shape (5, 5, 3, 64).”
I have tried my best, but the problem is still not solved. Can anyone help me?

When I generated a random training data set of size (128, 64, 64, 3), the problem was solved, so I think the code only accepts certain input sizes. What I'm trying to do is understand the code and fine-tune it for my experiment. Thanks again for the author's nice code.
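One workaround I'm considering is to convert the MNIST batches to the (64, 64, 3) shape before feeding them in. Here is a minimal sketch (the helper name and the resize/scaling choices are my own assumptions, not part of this repository):

```python
import tensorflow as tf

# Minimal sketch: adapt MNIST batches ([batch, 28, 28, 1], values in [0, 1])
# to the [batch, 64, 64, 3] input the unmodified code expects.
def mnist_to_dcgan_input(images):
    resized = tf.image.resize_images(images, [64, 64])   # -> [batch, 64, 64, 1]
    rgb = tf.image.grayscale_to_rgb(resized)             # -> [batch, 64, 64, 3]
    return rgb * 2.0 - 1.0                               # scale to [-1, 1], assuming a tanh generator output
```

The scaling to [-1, 1] on the last line is only my assumption that the generator ends in tanh; if that is not the case, that line can be dropped.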

Sorry for the late reply.
Indeed, I wrote this code assuming RGB color images of shape (64, 64, 3) as input, so the discriminator's first convolution kernel is built for 3 input channels, which is why a 1-channel MNIST batch triggers that ValueError.
I would like to improve the code so that it accepts data of different sizes, such as MNIST.
Thanks!

That's OK. May I ask one more question?
What does the 15th line mean? 'outputs = tf.layers.dense(inputs, self.depths[0] * self.s_size * self.s_size)'
Is it equal to a linear fully connected layer which maps the input of size [batch_size, feature_size] to size [batch_size, 1024 * 4 * 4]?
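In other words, my guess is that it behaves like the sketch below (depths[0] = 1024 and s_size = 4 are just my reading of the defaults, and the 100-dim input is only an example):

```python
import tensorflow as tf

# Sketch of my understanding: a single linear (fully connected) layer that maps
# [batch_size, feature_size] to [batch_size, 1024 * 4 * 4], which the generator
# can then reshape into a [batch_size, 4, 4, 1024] feature map.
inputs = tf.placeholder(tf.float32, [None, 100])     # e.g. a 100-dim noise vector
outputs = tf.layers.dense(inputs, 1024 * 4 * 4)      # shape: [batch_size, 16384]
reshaped = tf.reshape(outputs, [-1, 4, 4, 1024])     # shape: [batch_size, 4, 4, 1024]
```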

Got it! Thank you!