shangwei5/D2Net

a question about model degradation

HaoDot opened this issue · 2 comments

Hi @shangwei5, sorry to disturb you again.
Regarding Table 1 and Table 2: the first appears to be the comparison under the normal setting, where all input frames are blurry, the same as in previous works, while the second is the comparison under the non-consecutively blurry setting, where the input frames consist of both sharp and blurry frames.
However, even though all models are trained on the non-consecutively blurry dataset, the performance in Table 2 is not as good as that in Table 1. Moreover, the performance gap between D2Net and the other models is smaller under the non-consecutively blurry setting, as shown in the figure below.
[figure: comparison of the results in Table 1 and Table 2]
Perhaps deblurring is more challenging under the non-consecutively blurry setting. I hope you can help me understand this phenomenon better.

No, these two tables are both under the non-consecutively blurry setting:
For Table 1, we compute PSNR/SSIM only on the blurry frames of the non-consecutively blurry videos, according to the labels.
For Table 2, we compute PSNR/SSIM on all frames of the non-consecutively blurry videos.
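For clarity, here is a minimal sketch of the two evaluation protocols, not the authors' evaluation script: the `evaluate` helper, the frame lists, and the per-frame blur labels are hypothetical names used only for illustration.

```python
# Sketch of the two protocols: Table 1 averages PSNR/SSIM over labeled
# blurry frames only; Table 2 averages over all frames of the same videos.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio as psnr
from skimage.metrics import structural_similarity as ssim


def evaluate(outputs, targets, blur_labels, blurry_only):
    """Average PSNR/SSIM over the selected frames (hypothetical helper)."""
    psnr_vals, ssim_vals = [], []
    for out, gt, is_blurry in zip(outputs, targets, blur_labels):
        if blurry_only and not is_blurry:
            continue  # Table 1 protocol: skip frames labeled as sharp
        psnr_vals.append(psnr(gt, out, data_range=255))
        ssim_vals.append(ssim(gt, out, channel_axis=-1, data_range=255))
    return float(np.mean(psnr_vals)), float(np.mean(ssim_vals))


if __name__ == "__main__":
    # Toy data standing in for restored frames and ground truth.
    rng = np.random.default_rng(0)
    targets = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(4)]
    outputs = [
        np.clip(t.astype(int) + rng.integers(-5, 6, t.shape), 0, 255).astype(np.uint8)
        for t in targets
    ]
    blur_labels = [1, 0, 1, 0]  # hypothetical labels: frames 0 and 2 are blurry

    print("Table 1 style (blurry frames only):",
          evaluate(outputs, targets, blur_labels, blurry_only=True))
    print("Table 2 style (all frames):",
          evaluate(outputs, targets, blur_labels, blurry_only=False))
```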

Got it! Thanks again for your reply.
So the experiments prove the three points below:

  • When evaluated on blurry frames only (Table 1), D2Net outperforms the other models by utilizing neighbouring sharp frames, which means the information from nearby sharp frames is useful.
  • When all frames are counted (Table 2), D2Net still outperforms the other models without deblurring the sharp frames, which means the model can focus only on the severely blurry frames.
  • The non-consecutively blurry setting is practical and meaningful.