zwx8981/LIQE

About obtaining the multi-task Labels in NR datasets

Closed this issue · 4 comments

teowu commented

Hi Weixia,

A nice work on Vision-Language-based QA!

I have a question to ask: how are these scene labels and degradation type labels obtained, especially for the no-reference datasets (KonIQ-10k, CLIVE)? As far as I know, these would require a large amount of human annotation.

Best,
Haoning

zwx8981 commented

Hi Haoning, thanks for your interest. As you mentioned, we asked a group of subjects to annotate the scene labels and the dominant distortion types of the images. The workload is not that heavy because the IQA databases are much smaller than datasets for other vision tasks.

teowu commented

Thank you, Weixia. This is quite a contribution, as I think these labels can also benefit future methods.
May I know whether the labels are assigned by a single annotator per image or by a vote of multiple annotators?

zwx8981 commented

Thanks for appreciating our work, Haoning. The label of each image is obtained by majority voting among multiple annotators.
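For concreteness, a minimal sketch of what such a majority-vote label aggregation might look like (the function name, tie-breaking behavior, and example labels here are illustrative assumptions, not taken from the LIQE codebase):

```python
from collections import Counter

def majority_vote(annotations):
    """Return the label chosen by the most annotators for one image.

    `annotations` is a list of labels (e.g. scene categories or
    dominant distortion types) given by different annotators.
    Ties are broken by first occurrence in the list (a simplifying
    assumption; a real pipeline might flag ties for re-annotation).
    """
    counts = Counter(annotations)
    label, _ = counts.most_common(1)[0]
    return label

# Hypothetical example: three annotators label one image's dominant distortion.
print(majority_vote(["blur", "blur", "noise"]))  # -> blur
```

In practice one might also record the vote margin as a rough confidence signal for each aggregated label.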

teowu commented

Thanks for your efforts. That makes the labels much more reliable.