Strange inference time?
In your paper you report an inference time of 0.072 seconds for BasicVSR++, and I wonder how you obtained that value. It would correspond to 13.9 FPS, and I have never seen BasicVSR++ run that fast. So how do you arrive at only 0.072 seconds for BasicVSR++?
And a second question: if this is true, then your model at 0.427 s is nearly 6 times slower than the already very slow BasicVSR++.
Is this really the case? Six times slower than BasicVSR++?
For a fair comparison, we measured the average inference time over 100 independent runs for all compared models. The average runtime of BasicVSR++ was 0.072 s, which is consistent with the 77 ms reported in its paper.
As you mentioned, the inference time of our FMA-Net is 0.427 s, approximately 6 times slower than BasicVSR++. This is because BasicVSR++ is built only from fast, lightweight convolution and warping operations. Deblurring, unlike pure VSR, additionally requires global feature mapping, which makes our model relatively slower.
Also, please keep in mind that inference time may vary depending on the environment (GPU, OS, etc.).
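For reference, here is a minimal timing sketch along the lines of the measurement described above (an illustration only, not the authors' actual benchmarking script; it assumes a PyTorch model on a CUDA GPU, and the dummy model and input shape are placeholders for BasicVSR++ / FMA-Net and a real input clip):

```python
import time
import torch

# Placeholder network standing in for the real model (BasicVSR++ / FMA-Net);
# swap in the actual model and a real input to reproduce the reported numbers.
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1).cuda().eval()
x = torch.randn(1, 3, 256, 256, device="cuda")  # dummy input frame

with torch.no_grad():
    # Warm-up runs so CUDA initialization and kernel caching do not skew timing.
    for _ in range(10):
        model(x)
    torch.cuda.synchronize()

    times = []
    for _ in range(100):  # 100 independent runs, then averaged
        start = time.perf_counter()
        model(x)
        torch.cuda.synchronize()  # wait for the GPU before stopping the clock
        times.append(time.perf_counter() - start)

print(f"average inference time: {sum(times) / len(times):.4f} s")
```

The `torch.cuda.synchronize()` calls matter here: without them the clock stops before the GPU has finished, which can make any model look unrealistically fast.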
I will close this issue as there has been no further discussion. Please re-open the issue if there are additional comments.