huawei-noah/Pretrained-IPT

Fine-tune Dataset for SR task

Guanyu-Lin opened this issue · 6 comments

Thank you for sharing this great repo!

Excuse me, I am not clear about the fine-tuning dataset you used for the SR task. It seems it may not be mentioned in the paper or the code repo?

I would appreciate it if you could help me. Thank you.

We use DIV2K to fine-tune for the SR task.
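
For anyone looking for a starting point, here is a minimal PyTorch-style sketch of what fine-tuning on DIV2K LR/HR patch pairs could look like. It is not the repo's actual training script; the function name, loss choice, and hyperparameters are placeholder assumptions.

```python
# Minimal fine-tuning sketch (assumption: not the repo's actual trainer).
# Expects a pretrained SR model and a loader yielding (lr_patch, hr_patch) pairs
# cropped from DIV2K; uses a plain pixel-wise L1 loss.
import torch
import torch.nn as nn

def finetune_sr(model, loader, epochs=50, lr=2e-5, device="cuda"):
    """Fine-tune a pretrained SR model on (lr_patch, hr_patch) pairs."""
    model = model.to(device).train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    l1 = nn.L1Loss()
    for _ in range(epochs):
        for lr_patch, hr_patch in loader:
            lr_patch, hr_patch = lr_patch.to(device), hr_patch.to(device)
            sr = model(lr_patch)        # super-resolved prediction
            loss = l1(sr, hr_patch)     # pixel-wise L1, common for SR fine-tuning
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```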

I want to ask about Fig. 6 of the paper: what are the task, dataset, and setting when you compare IPT with other CNNs?

We compared them on the Set5 dataset for 2x SR.

It seems that Fig. 6 in the arXiv v1 and the CVPR21 version are different.

Sorry for the mistake. We compared them on Set5 for 2x SR in the arXiv v1, while in the CVPR21 version we compared them on Urban100 for 2x SR (because the difference on Set5 is too small).
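
For reference, 2x SR results on Set5/Urban100 are typically scored as PSNR on the luminance (Y) channel with a scale-sized border shaved off. A rough sketch following that common benchmark practice (an assumption, not the repo's exact evaluation code):

```python
# PSNR-on-Y sketch for 2x SR evaluation (common SR-benchmark convention;
# assumption: not necessarily identical to the repo's evaluation script).
import numpy as np

def psnr_y(sr, hr, scale=2):
    """PSNR between uint8 RGB images, computed on the Y channel, border shaved."""
    def to_y(img):
        img = img.astype(np.float64)
        # BT.601 RGB -> Y for inputs in [0, 255]
        return 16.0 + (65.481 * img[..., 0] + 128.553 * img[..., 1]
                       + 24.966 * img[..., 2]) / 255.0
    y_sr = to_y(sr)[scale:-scale, scale:-scale]
    y_hr = to_y(hr)[scale:-scale, scale:-scale]
    mse = np.mean((y_sr - y_hr) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```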

How long does fine-tuning usually take (from the ImageNet-pretrained model to the DIV2K-finetuned model)?
How many GPU days, and on which GPU?