ChenyangLEI/deep-video-prior

Video Super-Resolution via Deep Video Prior

littlewhitesea opened this issue · 3 comments

Thanks for your excellent work!

I have two questions about the Deep Video Prior.

1. Have you attempted any experiments on video super-resolution (VSR) with the deep video prior? If yes, how was the performance?

2. In my opinion, the temporal consistency of the output frames depends heavily on the temporal consistency of the input frames. If the input frames are not temporally consistent, can the deep video prior still work?

I hope you can help me with these questions.

Good questions.

  1. We attempted some experiments on VSR, but we did not conduct them systematically. Still, we observed that using a U-Net as the network for VSR does not produce performance as satisfying as in other tasks (e.g., colorization). We believe that using a SOTA SR architecture and a SOTA loss might yield better performance.

  2. Yes. We assume the input frames are temporally consistent, which is a common property of realistic videos. If the input frames are inconsistent, the performance will degrade.
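As a side note, the mechanism behind this can be sketched numerically: when a model is trained online over flickering processed frames of a scene, its slowly updated parameters average out the per-frame inconsistency, so its outputs flicker less than its targets. The toy NumPy example below (not from this repo; a single learned "frame" of parameters stands in for the network, and the scene is assumed static) illustrates the idea:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.random(16)  # underlying static scene content
# flickering "processed" frames: same content plus independent per-frame noise
frames = [clean + 0.3 * rng.standard_normal(16) for _ in range(50)]

w = np.zeros(16)   # model parameters (here: one learned frame)
lr = 0.1
outputs = []
for target in frames:
    outputs.append(w.copy())    # model output before seeing this frame
    grad = 2 * (w - target)     # gradient of ||w - target||^2
    w -= lr * grad              # one SGD step per frame

def flicker(seq):
    """Mean squared difference between consecutive frames."""
    return np.mean([np.mean((a - b) ** 2) for a, b in zip(seq[:-1], seq[1:])])

# The slowly trained model's outputs are temporally smoother than its targets.
print(flicker(outputs) < flicker(frames))
```

The same averaging effect is what a deep network trained on one video exploits, which is also why the benefit shrinks when the inputs themselves are inconsistent: the "noise" is then no longer independent of the content.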

Thank you.

Thanks for your quick reply and detailed explanation.

You are welcome. BTW, when I mentioned "using a SOTA architecture of SR and SOTA loss," I meant replacing the U-Net and the perceptual loss of the model in this repo; the DVP framework itself is still applicable.