hellozhuo/pidinet

Multicue dataset, set 1 set 2 set 3

wasaCheney opened this issue · 4 comments

Hi Zhuo,
Nice work, and it helps a lot! Thanks.

But why does it have three different sets (e.g. train_pair_edge_set_1/2/3.lst) for the Multicue dataset? Are they randomly split from the corresponding train_pair_edge.lst?

To get the results in Table 7 (Multicue, boundary and edge), should one run the code separately on each of the three sets and then average their edge maps for evaluation? And for the five GTs (1, 2, 3, 4, 5), which should be used for evaluation, or all of them?

Hi @wasaCheney, thanks for your interest in our work. Exactly as you said, the three sets are randomly split, forming three different train-eval partitions. For each set, we trained on the "train" partition and evaluated on the "eval" partition, which gives three ODS results. The overall result is the average of the three.
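For anyone following along, here is a minimal sketch of that last step, assuming you have already trained and evaluated on each of the three .lst splits and collected an ODS score per split (the score values below are placeholders, not results from the paper):

```python
# Minimal sketch (not part of the pidinet code base): average the ODS scores
# obtained from the three Multicue train/eval splits. Replace the zeros with
# the ODS values from your own evaluation runs.

ods_per_split = {
    "train_pair_edge_set_1.lst": 0.0,  # ODS of the model trained/evaluated on set 1
    "train_pair_edge_set_2.lst": 0.0,  # ODS of the model trained/evaluated on set 2
    "train_pair_edge_set_3.lst": 0.0,  # ODS of the model trained/evaluated on set 3
}

overall_ods = sum(ods_per_split.values()) / len(ods_per_split)
print(f"Overall ODS (mean over 3 splits): {overall_ods:.3f}")
```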

For the GT, I think there are only two annotations for each scene: one for "edge" and the other for "boundary". Since there are 100 scenes in total, the dataset gives 200 GTs. You can probably find more information at #11 or https://serre-lab.clps.brown.edu/resource/multicue/.

Hi @zhuoinoulu, thanks for your kind instructions. Discussion #11 is helpful as well.

On the one hand, from the official Multicue website, we can see that each labeled image has edge and boundary annotations. For the edge annotation, we can find 6 different GTs (labeled 1, 2, 3, 4, 5, 6, as the image shows).
[screenshot: six edge GTs per scene listed on the Multicue website]

On the other hand, multicue_pidinet (from #11) contains only one edge (and one boundary) annotation for each labeled image.
[screenshot: multicue_pidinet files, with a single edge/boundary GT per image]

My question is: how do the 6 GTs become 1 GT, by averaging or something else?

I hope that makes my question clear.

Hi @wasaCheney
The 6 GTs for a particular image were probably annotated by 6 different annotators. We should fuse those annotations into a single GT map for each image; please refer to #11 for the Matlab code that generates the .mat GT files.
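As a rough illustration of what "fusing" means here, a Python sketch is shown below. The averaging-then-thresholding rule and the file names are my assumptions for illustration only; the Matlab script in #11 is the authoritative procedure for reproducing the paper's GT files.

```python
# Rough sketch (not the official pipeline): fuse several annotators' binary
# edge maps into a single GT map by averaging and thresholding.
import numpy as np
from PIL import Image

def fuse_annotations(paths, threshold=0.5):
    """Average binary edge maps from several annotators and binarize the result."""
    maps = [np.asarray(Image.open(p).convert("L"), dtype=np.float32) / 255.0
            for p in paths]
    mean_map = np.mean(maps, axis=0)                  # fraction of annotators marking each pixel
    return (mean_map >= threshold).astype(np.uint8)   # keep pixels enough annotators agree on

# Hypothetical file names for the 6 edge annotations of one scene:
# fused = fuse_annotations([f"scene_0001.edge.{i}.png" for i in range(1, 7)])
```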

That makes sense, thank you for your help. I will try the code.
Maybe this issue can also be closed 😁