Is the generation text-guided?
First, thanks for your wonderful work. However, when I tested the model I trained, it output results directly, without any guiding text. Also, each test sample in the Flintstones dataset has 5 frames. I wonder how the test data are used, and where the guiding text comes in.
I've figured it out now. I hadn't noticed before that there are two tasks to choose from.
@lll-zy Great! You can refer to our paper for the task settings, and feel free to raise any further questions.
Thank you for your reply! But I still don't understand how the test data are used in the visualization task. In the appendix of the paper, each ground-truth image corresponds to one text, and there are five texts and five outputs, right? However, each sample in the test dataset has five frames, and they all correspond to the same text. I don't understand what the ground truth and its description are during generation.
@lll-zy Hi, it is not correct that "each sample in the test dataset has five frames that correspond to the same text". If this happens, there must be a bug in the implementation. Could you please share a case with me?
Maybe I misunderstood. I downloaded the Flintstones dataset, which contains many .npy files, each with shape (5, 128, 128, 3). In flintstones_annotations_v1-0.json, each .npy corresponds to a text.
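For example, this is the quick check I ran on one file (the file name below is just a placeholder, not an actual globalID from the dataset):

```python
import numpy as np

# Quick sanity check on one clip; "some_global_id.npy" is a
# placeholder file name, not a real ID from the dataset.
frames = np.load("some_global_id.npy")
print(frames.shape)  # (5, 128, 128, 3): 5 sampled frames, 128x128 RGB
```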
@lll-zy Yep, that's true, and each text should contain 5 captions; each caption corresponds to a frame.
@lll-zy It makes sense because every frame is sampled from a video; the captions are in "flintstones_annotations_v1-0.json", as you can see in:
ARLDM/data_script/flintstones_hdf5.py, line 16 (commit 5b03fc4)
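Roughly, that part of the script builds a lookup from each clip's globalID to its caption. Here is a minimal sketch of the idea, assuming the annotation entries carry "globalID" and "description" fields (adjust if the actual schema differs):

```python
import json

# Map each clip's globalID to its caption. The field names
# "globalID" and "description" are assumptions about the
# annotation schema, not guaranteed by this sketch.
with open("flintstones_annotations_v1-0.json") as f:
    annotations = json.load(f)

descriptions = {entry["globalID"]: entry["description"] for entry in annotations}

# A 5-frame story is a sequence of 5 globalIDs, so each frame
# gets its own caption, e.g.:
#   captions = [descriptions[gid] for gid in story_ids]
```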
You may debug our code to figure out how the dataset is organized.
I'll take a closer look. Thanks again for your answer.