Difference between the PSNR values from the pre-trained model and the paper on Vid4 data
brmurali opened this issue · 12 comments
Thanks @Mukosame for the wonderful work.
I tried running Zooming Slow-Mo on the Vid4 data.
Below are the PSNR results obtained using the pre-trained model given in the repo:
20-07-29 12:24:23.998 - INFO: ################ Tidy Outputs ################
20-07-29 12:24:23.998 - INFO: Folder calendar - Average PSNR: 15.863634 dB PSNR-Y: 17.315718 dB.
20-07-29 12:24:23.998 - INFO: Folder city - Average PSNR: 20.775272 dB PSNR-Y: 22.207506 dB.
20-07-29 12:24:23.998 - INFO: Folder foliage - Average PSNR: 19.139918 dB PSNR-Y: 20.540553 dB.
20-07-29 12:24:23.998 - INFO: Folder walk - Average PSNR: 20.339251 dB PSNR-Y: 21.705512 dB.
20-07-29 12:24:23.998 - INFO: ################ Final Results ################
20-07-29 12:24:23.998 - INFO: Data: Vid4 - /home/ubuntu/Basavaraj/Zooming-Slow-Mo-CVPR-2020/test_example/Vid4/LR/*
20-07-29 12:24:23.999 - INFO: Padding mode: replicate
20-07-29 12:24:23.999 - INFO: Model path: ../experiments/pretrained_models/xiang2020zooming.pth
20-07-29 12:24:23.999 - INFO: Save images: False
20-07-29 12:24:23.999 - INFO: Flip Test: True
20-07-29 12:24:23.999 - INFO: Total Average PSNR: 19.029519 dB PSNR-Y: 20.442322 dB for 4 clips.
In the paper, a PSNR of 26.31 dB is reported.
But I got PSNR: 19.029519 dB and PSNR-Y: 20.442322 dB for 4 clips.
Then I tried changing N_ot to 3, but there was no real improvement.
For N_ot=3, I got PSNR: 19.579613 dB and PSNR-Y: 21.011176 dB for 4 clips.
I just want to confirm whether my evaluation method is correct, and also whether you reported PSNR or PSNR-Y in the paper.
data_mode = 'Vid4'
scale = 4
N_ot = 7
flip_test = True
padding = 'replicate'
I also tried N_ot = 3, with slightly better results. Please correct me if any of these settings are wrong.
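For reference, here is a minimal sketch of how I understand the difference between PSNR and PSNR-Y (assuming the BT.601 limited-range luma conversion, as in MATLAB's rgb2ycbcr; this is just an illustration, not the repo's exact evaluation code):

```python
import numpy as np

def rgb_to_y(img):
    """BT.601 limited-range luma from an RGB image in [0, 255] (assumption: rounded coefficients)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 0.257 * r + 0.504 * g + 0.098 * b + 16.0

def psnr(pred, gt, max_val=255.0):
    """Peak signal-to-noise ratio between two images of the same shape."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# PSNR is computed on all RGB channels, PSNR-Y only on the luma channel:
# psnr_rgb = psnr(sr_img, hr_img)
# psnr_y   = psnr(rgb_to_y(sr_img), rgb_to_y(hr_img))
```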
Hi @brmurali , thanks for bringing up this issue. Our results are obtained with N_ot = 7. Still, PSNR-Y = 21 is way too low. Would you like to post some visual results, or share more details about how you obtained the input images?
Hi @brmurali , one more thing I just remembered --- please also check your log to see whether the image indices are sorted correctly. In my test script, I sort the files with this line: https://github.com/Mukosame/Zooming-Slow-Mo-CVPR-2020/blob/master/codes/test.py#L106 . But if your filenames differ from mine, this line will most likely give you the wrong ordering and feed the model frames out of sequence.
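Just as an illustration (this is not code from the repo), here is why a plain lexicographic sort can go wrong when filenames are not zero-padded:

```python
# Zero-padded names sort correctly with a plain lexicographic sort:
print(sorted(['0002.png', '0010.png', '0001.png']))
# -> ['0001.png', '0002.png', '0010.png']

# Non-padded names do not, which would feed the model out-of-order frames:
print(sorted(['2.png', '10.png', '1.png']))
# -> ['1.png', '10.png', '2.png']
```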
Hi @Mukosame , thanks a lot for the quick response.
I downloaded the Vid4 HR and LR data with the commands below:
wget https://ge.in.tum.de/download/data/TecoGAN/vid3_LR.zip -O LR/vid3.zip
wget https://ge.in.tum.de/download/data/TecoGAN/vid4_HR.zip -O HR/vid4.zip
And for the LR calendar category, I downloaded it from the TecoGAN repo: https://github.com/thunil/TecoGAN/tree/master/LR/calendar
Sure!
I will look into that.
Hi @Mukosame ,
I just put a print statement after this line: https://github.com/Mukosame/Zooming-Slow-Mo-CVPR-2020/blob/master/codes/test.py#L106 to check the list of filenames, and below is a snippet from the output log:
'/home/ubuntu/Basavaraj/Zooming-Slow-Mo-CVPR-2020/test_example/Vid4/LR/walk/0001.png', '/home/ubuntu/Basavaraj/Zooming-Slow-Mo-CVPR-2020/test_example/Vid4/LR/walk/0002.png', '/home/ubuntu/Basavaraj/Zooming-Slow-Mo-CVPR-2020/test_example/Vid4/LR/walk/0003.png'
Hi @Mukosame,
Just FYI.
I've also started training on my local GPU (Nvidia GeForce RTX 2080 Ti) with a batch size of 12 due to memory limitations.
It is now at around 51,000 iterations.
So I just evaluated the 50000_G.pth model, and below is the PSNR output:
Total Average PSNR: 19.907639 dB PSNR-Y: 21.354085 dB for 4 clips.
Hi @brmurali , the log should contain the order of the filenames; please look in your ../results/Vid4/test_xxxx.log to check them. There are two possible problems:
- TecoGAN uses a different way to generate LR images. To avoid this issue, I suggest you use our script https://github.com/Mukosame/Zooming-Slow-Mo-CVPR-2020/blob/master/codes/data_scripts/generate_mod_LR_bic.py to generate the LR images (see the rough sketch after this list for what that downscaling looks like).
- A wrong sequence, or a mismatch between LR and HR images. The best way to tell whether this is happening is to check the log and the generated images. You can change "save_imgs" to True in this line https://github.com/Mukosame/Zooming-Slow-Mo-CVPR-2020/blob/master/codes/test.py#L46 and check the outputs in the ../results/Vid4 folder.
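For intuition only, here is a rough sketch of what "bicubic x4 LR generation" means. The paths are placeholders, and OpenCV's bicubic is not byte-identical to the MATLAB-style imresize used by generate_mod_LR_bic.py, so please still use the repo script for the actual evaluation:

```python
import glob
import os

import cv2

scale = 4
hr_dir = 'test_example/Vid4/HR/calendar'   # hypothetical paths
lr_dir = 'test_example/Vid4/LR/calendar'
os.makedirs(lr_dir, exist_ok=True)

for hr_path in sorted(glob.glob(os.path.join(hr_dir, '*.png'))):
    hr = cv2.imread(hr_path, cv2.IMREAD_UNCHANGED)
    h, w = hr.shape[:2]
    # crop so that height and width are divisible by the scale factor
    hr = hr[:h - h % scale, :w - w % scale]
    # bicubic x4 downscaling (the repo script uses a MATLAB-style imresize instead)
    lr = cv2.resize(hr, (hr.shape[1] // scale, hr.shape[0] // scale),
                    interpolation=cv2.INTER_CUBIC)
    cv2.imwrite(os.path.join(lr_dir, os.path.basename(hr_path)), lr)
```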
Thanks much @Mukosame!!
I will try to generate LR images using your scripts and then update the results.
Hi @Mukosame,
You were correct.
After generating LR images using your script, results are close to the one mentioned in the paper.
Total Average PSNR: 24.492724 dB and PSNR-Y: 26.004505 dB for 4 clips.
And thanks a lot for the quick response and the clarification on the LR images!!