A little confused about the BLEU calculation
hscspring opened this issue · 2 comments
I am a little confused about the BLEU calculation `nltk.translate.bleu_score.sentence_bleu(reference, h, weight)`, in which `reference` is `save/realtest_coco.txt` and `h` is `save/generator_sample.txt`. Does the score somehow depend on the size and contents of `realtest_coco.txt`?
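For context, here is a minimal sketch of how such a BLEU evaluation is typically set up with nltk; the file paths follow the repo, while the whitespace tokenization and the BLEU-4 weights are assumptions on my part:

```python
from nltk.translate.bleu_score import sentence_bleu

# Every line of realtest_coco.txt is treated as one reference sentence,
# so the score depends directly on the size and contents of that file.
with open('save/realtest_coco.txt') as f:
    references = [line.split() for line in f]

# Score each generated sentence against the whole reference pool.
weight = (0.25, 0.25, 0.25, 0.25)  # standard BLEU-4 weights (assumed)
with open('save/generator_sample.txt') as f:
    scores = [sentence_bleu(references, line.split(), weight)
              for line in f]

print('avg BLEU:', sum(scores) / len(scores))
```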
Also, regarding this sentence in the paper:

> In each step, it receives generator D's high-level feature representation, e.g., the feature map of the CNN, and uses it to form the guiding goal for the WORKER module in that timestep.

I am not sure whether it should be "generator" or "discriminator".
By the way, it's really nice work, so many thanks to you.
- The details can be found in the "Image COCO" subsection of the paper: we randomly choose 80,000 sentences from the COCO dataset as the training set, and another 5,000 randomly chosen sentences serve as the test set (`save/realtest_coco.txt`); a sketch of this split appears after this list.
- The CNN is the discriminator, and the WORKER module belongs to the generator.
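For reference, a minimal sketch of the random split described above; the source caption file name `coco_captions.txt` is hypothetical, while the split sizes follow the numbers given:

```python
import random

# Hypothetical pool of COCO captions, one sentence per line.
with open('coco_captions.txt') as f:
    sentences = [line.strip() for line in f]

random.shuffle(sentences)
train = sentences[:80000]       # 80,000 training sentences
test = sentences[80000:85000]   # 5,000 test sentences

# Write the test split to the file used as the BLEU reference set.
with open('save/realtest_coco.txt', 'w') as f:
    f.write('\n'.join(test) + '\n')
```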
Thanks for your attention.
I hope the above can answer your questions.
I get it.
For the first point, I had seen the details in the paper; as a beginner I was just a little confused, no doubts :).
Thanks for your answer, and thank you so much.