TensorFlow placeholder error
robinsloan opened this issue · 8 comments
I'm very excited to see this project here -- I've been wondering about this technique ever since reading the paper!
I'm encountering a problem when I try to run main.py. It completes the first epoch of training, with this result…
Epoch: 1 Train costs: -0.670
Epoch: 1 Train KL divergence: -4.003
Epoch: 1 Train reconstruction costs: 0.000
…but then when it switches to testing, it halts at the first batch with this error:
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Train/Model/Placeholder' with dtype int32 and shape [32,5]
Have you encountered errors like this before? Any advice?
Thanks for your consideration!
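For context, TensorFlow raises this error whenever the op passed to `Session.run` depends, anywhere in the graph, on a placeholder missing from `feed_dict` -- so a test-time fetch that still references a train-scope placeholder (here `'Train/Model/Placeholder'`) triggers exactly this message. A plain-Python sketch of the check, with illustrative names rather than the repo's actual code:

```python
# Plain-Python sketch of the check TensorFlow performs inside Session.run:
# every placeholder the fetched op depends on must appear in feed_dict.
# Names here are illustrative, not the repo's actual code.

def run(required_placeholders, feed_dict):
    """Mimic Session.run's placeholder validation for one fetched op."""
    for name, (dtype, shape) in required_placeholders.items():
        if name not in feed_dict:
            raise RuntimeError(
                "You must feed a value for placeholder tensor %r "
                "with dtype %s and shape %s" % (name, dtype, shape))
    return "ok"

# At test time the fetched op still depends on a train-scope placeholder:
deps = {"Train/Model/Placeholder": ("int32", [32, 5])}

try:
    run(deps, feed_dict={})              # placeholder not fed -> error
except RuntimeError as e:
    print(e)

# Feeding it (even a dummy int32 batch of shape [32, 5]) avoids the error:
run(deps, feed_dict={"Train/Model/Placeholder": [[0] * 5] * 32})
```

So the usual workarounds are either to feed the train placeholder a dummy batch at test time, or to build separate train/test models that share variables but not placeholders.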
Hi @robinsloan, thank you for starring this repo :-) Yes, I've also encountered this error on my two machines. One of my machines continued to run at both train and test time despite the error, while the other halted; I still can't figure out why. Plus, the code inside is still buggy, so I'll very likely rewrite the whole thing based on TensorFlow's seq2seq example before 12/17. Sorry for the bug.
@Chung-I have you modified the code to avoid the error pointed out by @robinsloan? I am getting the following error during test time while running main.py:
Epoch: 1 test Variational Lower Bound: -8589851.262
Epoch: 1 test KL divergence: -4416.019
Epoch: 1 test reconstruction costs: -334426.830
outputfile created
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Train/Model/Placeholder' with dtype int32 and shape [128,5]
[[Node: Train/Model/Placeholder = Placeholder dtype=DT_INT32, shape=[128,5], _device="/job:localhost/replica:0/task:0/cpu:0"]]
One more question: which version of TensorFlow are you using?
@gourango01 Sorry for that, but this repo has been abandoned; I'm working on another repo implementing a Variational Recurrent Autoencoder based on TensorFlow's seq2seq example. I have a workable version locally, but I haven't finished the docs yet, and the code currently on master is not runnable. I'll update it ASAP.
@Chung-I It looks like you've published something new -- this is exciting! I'm eager to try it out.
@robinsloan thanks for patiently waiting. I've updated the README.md. Please let me know if there's any bug.
@Chung-I When I use multiple buckets instead of one, e.g. "buckets": [[5,6],[10,11],[15,16],[20,21],[57,58]], the code works fine during training, even though the placeholders for encoder_input and decoder_input are initialized for the largest bucket ([57,58] in the example above) and execution goes through the step() function in seq2seq_model.py. But things stop working in sample or interpolate mode: I get a placeholder error expecting input matching the largest bucket size. For example, for the sentence of length 6 "link to apply for MS admissions", which maps to the second bucket in the list above, I get the following error:
You must feed a value for placeholder tensor 'encoder10' with dtype int32
[[Node: encoder10 = Placeholder dtype=DT_INT32, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]]
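In case it helps to track this down: in the seq2seq-tutorial style this repo follows, a batch is padded only to the size of its chosen bucket and step() feeds exactly that many encoderN placeholders, so an unfed 'encoder10' suggests the fetched op was built for a larger bucket than the one fed. A rough sketch of the intended bucket selection and padding (function and constant names here are illustrative, not the repo's actual code):

```python
# Rough sketch of bucket selection and padding, modeled on TensorFlow's
# seq2seq tutorial; PAD_ID, select_bucket, pad_to_size are illustrative.
PAD_ID = 0

def select_bucket(buckets, source_len, target_len):
    """Return the index of the smallest bucket that fits the pair."""
    for i, (src_size, tgt_size) in enumerate(buckets):
        if source_len < src_size and target_len < tgt_size:
            return i
    raise ValueError("sentence longer than the largest bucket")

def pad_to_size(token_ids, size):
    """Pad a token-id list with PAD_ID up to the bucket's encoder size."""
    return token_ids + [PAD_ID] * (size - len(token_ids))

buckets = [(5, 6), (10, 11), (15, 16), (20, 21), (57, 58)]
sentence = [11, 7, 42, 3, 19, 5]                  # a length-6 sentence
b = select_bucket(buckets, len(sentence), len(sentence) + 1)   # bucket 1
encoder_input = pad_to_size(sentence, buckets[b][0])           # length 10

# step() should then feed exactly buckets[b][0] encoderN placeholders
# (encoder0..encoder9 here) and fetch the output op built for bucket b;
# fetching an op built for a larger bucket asks for 'encoder10' and up.
```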
@gourango01 Thanks for the bug report. I've pushed some code that should fix the bug.
@Chung-I I wonder whether you have encountered this kind of error: FailedPreconditionError (see above for traceback): Attempting to use uninitialized value enc_embedding? And what is the main difference between this repo and the new one? I've taken a glance at it, and the two seem to have a similar skeleton. Thanks!
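For what it's worth, that FailedPreconditionError usually just means an op reading the enc_embedding variable ran before tf.global_variables_initializer() (or a Saver.restore) did, so initializing or restoring first should clear it. A toy plain-Python sketch of the variable lifecycle TensorFlow enforces (illustrative names only, not TensorFlow code):

```python
# Toy stand-in for a TF variable: reading it before it is initialized
# (or restored from a checkpoint) raises, just like FailedPreconditionError.
class ToyVariable:
    def __init__(self, name):
        self.name = name
        self._value = None                 # not yet initialized

    def initialize(self, value):
        """Mimic tf.global_variables_initializer() / Saver.restore()."""
        self._value = value

    def read(self):
        if self._value is None:
            raise RuntimeError(
                "Attempting to use uninitialized value %s" % self.name)
        return self._value

emb = ToyVariable("enc_embedding")
try:
    emb.read()                             # running a lookup before init
except RuntimeError as e:
    print(e)

emb.initialize([[0.1, 0.2], [0.3, 0.4]])   # run the initializer first...
rows = emb.read()                          # ...then reads succeed
```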