open-mmlab/mmfashion

Are annotated landmarks given as input when evaluating category/attribute prediction?

zyue1105 opened this issue

It seems to me that the annotated landmarks are given as input to the category/attribute prediction benchmark, which is a bit odd since we don't have annotated landmarks for real-world images: https://github.com/open-mmlab/mmfashion/blob/master/mmfashion/apis/test_predictor.py#L101.
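For concreteness, here is a minimal sketch of the data flow I'm asking about, assuming a standard PyTorch-style test loop; the `batch['landmark']` key and the `model(img, attr=None, landmark=..., return_loss=False)` call are illustrative assumptions, not necessarily mmfashion's exact API:

```python
import torch

def evaluate(model, data_loader):
    """Hypothetical test loop illustrating the concern: the landmark
    tensor comes straight from the dataset annotations, not from a
    landmark detector."""
    model.eval()
    with torch.no_grad():
        for batch in data_loader:
            img = batch['img']
            # 'landmark' here is the annotated ground truth loaded by the
            # dataset -- this is the step that seems odd, since annotated
            # landmarks are unavailable at inference time in the real world.
            landmark = batch['landmark']
            attr_prob = model(img, attr=None, landmark=landmark,
                              return_loss=False)
```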

I wanted to confirm whether the evaluation results in https://github.com/open-mmlab/mmfashion/blob/master/docs/MODEL_ZOO.md are obtained with the annotated landmarks or with predicted landmarks.

By the way, the landmark tensor's size needs to be changed to be compatible with the RoI model (https://github.com/open-mmlab/mmfashion/blob/master/demo/test_cate_attr_predictor.py#L44); this mismatch caused the problems reported in these issues: https://github.com/open-mmlab/mmfashion/issues?q=is%3Aissue+is%3Aopen+invalid. Furthermore, if I understand correctly, the landmarks need to be predicted before being passed to the model. A sketch of both points follows.
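Below is a minimal sketch of what I mean, assuming DeepFashion-style 8 clothing landmarks flattened to an `(x1, y1, ..., x8, y8)` vector for the RoI pooling; `landmark_model` and the predictor's call signature are hypothetical placeholders, not mmfashion's confirmed API:

```python
import torch

NUM_LANDMARKS = 8  # assumption: DeepFashion-style 8 clothing landmarks

def prepare_landmarks(landmark_model, img):
    """Run a (hypothetical) landmark detector, then flatten its output
    to the (batch_size, NUM_LANDMARKS * 2) layout that the RoI pooling
    in the predictor is assumed to expect."""
    with torch.no_grad():
        pred = landmark_model(img)         # assumed shape: (B, 8, 2)
    return pred.reshape(pred.size(0), -1)  # -> (B, 16), RoI-compatible

def predict_attrs(predictor, landmark_model, img):
    """Two-stage inference: predict landmarks first, then feed them to
    the category/attribute predictor instead of zeros or annotations."""
    landmark = prepare_landmarks(landmark_model, img)
    return predictor(img, attr=None, landmark=landmark, return_loss=False)
```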