Mukosame/Zooming-Slow-Mo-CVPR-2020

Questions about Zooming Slow-mo validation on Vid4

aRrtTist opened this issue · 0 comments

I checked your source code and found that in test.py the Vid4 sequences are split into 7-frame GT clips for the PSNR calculation. Was the 26.31 dB PSNR on Vid4 reported in the paper also computed this way, or is the input a whole long sequence?
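For concreteness, here is a rough sketch of the two protocols I am comparing (this is not the repository's code; `run_model` is a hypothetical function that takes the LR inputs corresponding to a range of GT frames and returns the HR predictions):

```python
import numpy as np

def psnr(pred, gt, max_val=255.0):
    # Per-frame PSNR on float arrays in [0, max_val].
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def eval_clipwise(gt_frames, run_model, clip_len=7):
    # Split the sequence into 7-frame GT windows (as test.py does), run the model
    # on each window independently, then average the per-frame PSNR.
    scores = []
    for start in range(0, len(gt_frames) - clip_len + 1, clip_len):
        preds = run_model(start, start + clip_len)  # hypothetical: HR predictions for this window
        scores += [psnr(p, g) for p, g in zip(preds, gt_frames[start:start + clip_len])]
    return float(np.mean(scores))

def eval_full_sequence(gt_frames, run_model):
    # Feed the whole sequence in one pass, so any recurrent state spans all frames,
    # then average the per-frame PSNR over the full-length output.
    preds = run_model(0, len(gt_frames))
    return float(np.mean([psnr(p, g) for p, g in zip(preds, gt_frames)]))
```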

I would like to run some improvement experiments based on Zooming Slow-Mo. I followed the Zooming Slow-Mo training setup, using generate_mod_LR_bic.py to downsample the Vimeo data and generate the LR training frames, and used Vimeo-(Fast, Medium, Slow) and full-length Vid4 sequences as the validation sets. I found that as the number of iterations increased, the PSNR and SSIM on the Vimeo test sets kept improving, but the PSNR on Vid4 only increased for the first 70k-80k iterations and then decreased steadily. After 300k iterations of training, the metrics on Vimeo-(Fast, Medium, Slow) all surpassed Zooming Slow-Mo, but the PSNR on Vid4 dropped to 24.53 dB, lower than Zooming Slow-Mo's 26.31 dB.
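For reference, the LR data generation I used amounts to something like the sketch below (an approximation only; the repository's generate_mod_LR_bic.py uses its own MATLAB-style imresize, whereas this uses OpenCV's bicubic, so pixel values may differ slightly):

```python
import os, glob
import cv2

def make_lr(hr_dir, lr_dir, scale=4):
    # Generate 4x bicubic LR frames from HR frames, with a mod-crop first.
    os.makedirs(lr_dir, exist_ok=True)
    for path in sorted(glob.glob(os.path.join(hr_dir, '*.png'))):
        img = cv2.imread(path, cv2.IMREAD_COLOR)
        h, w = img.shape[:2]
        img = img[: h - h % scale, : w - w % scale]  # mod-crop so H and W are divisible by the scale
        lr = cv2.resize(img, (img.shape[1] // scale, img.shape[0] // scale),
                        interpolation=cv2.INTER_CUBIC)  # bicubic downsampling
        cv2.imwrite(os.path.join(lr_dir, os.path.basename(path)), lr)
```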

But if I follow test.py and compute the PSNR on 7 frames at a time, the result is again close to the paper's number. Is this normal? If not, what should I do?