test img1 score: 0.55, test img2 score: 0.54
I ran the example many times: the test img1 score is 0.55 and the test img2 score is 0.54, but I think the quality of img1 is much higher than that of img2. Is there something wrong?
Hi,
As explained by Philipp Terhörst in this issue, the SER-FIQ score describes how well the network can handle the given input image. Therefore, the score is not directly comparable to human perception.
The result for both images, at least with the pre-trained model "LResNet100E-IR,ArcFace@ms1m-refine-v2" that we use, should be about 0.89.
Which model did you use? And was MTCNN applied to the image?
Best regards,
Jan
Thanks for your reply. I use MTCNN, and with the pre-trained model "LResNet50E-IR,ArcFace@ms1m-refine-v2" I get a score of about 0.55. When I use the pre-trained model "LResNet100E-IR,ArcFace@ms1m-refine-v2", the score is about 0.89.
I don't understand: the performance of the two models is similar, so why are the scores so different?
Hi,
SER-FIQ determines the usefulness of an image for a specific face recognition model. This is done by calculating the robustness of the (model-dependent) embedding. Consequently, depending on the utilized model (in your case ResNet100 vs. ResNet50), the robustness of these embeddings changes, and with it the quality score.
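For illustration, the core idea can be sketched in a few lines of Python (a minimal sketch, not this repository's code; `stochastic_embed` is a hypothetical function standing in for one stochastic forward pass with dropout active):

```python
import numpy as np
from itertools import combinations

def ser_fiq_score(image, stochastic_embed, m=100):
    # m stochastic embeddings of the same image (dropout active each pass)
    embeddings = np.stack([stochastic_embed(image) for _ in range(m)])
    # L2-normalize so distances are compared on the unit hypersphere
    embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)
    # mean pairwise Euclidean distance measures the embedding's instability
    mean_dist = np.mean([np.linalg.norm(embeddings[i] - embeddings[j])
                         for i, j in combinations(range(m), 2)])
    # map to (0, 1]: robust embeddings (small distances) -> high quality
    return 2.0 / (1.0 + np.exp(mean_dist))
```

Since the embeddings come from the model itself, a different backbone yields different distances and therefore a different score range.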
As mentioned in the issue from Jan: "SER-FIQ on ArcFace produces very narrow quality estimates. Although this narrow quality range is inconvenient, it is still meaningful (if you take more than 2 decimal places into account)! To get a more 'natural' quality range, you can simply use scaling methods, such as MinMax normalization."
Therefore, scaling the quality scores from each model will result in more similar quality values.
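As a concrete (hypothetical) illustration of such scaling, assuming you have collected some calibration scores with one specific model:

```python
import numpy as np

# hypothetical raw SER-FIQ scores gathered with ONE specific model
calibration_scores = np.array([0.7901, 0.7925, 0.7913, 0.7978])
lo, hi = calibration_scores.min(), calibration_scores.max()

def minmax_normalize(score):
    # clip so scores outside the calibration range still land in [0, 1]
    return float(np.clip((score - lo) / (hi - lo), 0.0, 1.0))

print(minmax_normalize(0.7925))  # -> roughly 0.31
```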
If you have further questions, let us know.
Best
Philipp
I don't understand: why min-max normalization? You don't have a closed set, so what's the max? What's the min?
Hi hegc,
You don't need to normalize the quality scores. Without normalization, the quality estimates are meaningful. E.g., you can apply thresholds to identify face images of low quality (low usability for your recognition network).
However, depending on the utilized model, the range of the quality estimates varies. For some models, the range is between [0.790, 0.798]; for others, it is between [0.3, 0.7]. Since some people feel uncomfortable with these ranges, you can simply transform them into another (more convenient) range with an arbitrary normalization approach. For this, you might need some additional data to train your normalization on; see the sketch below.
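For example, a threshold could be derived from such additional (calibration) data. A minimal sketch with made-up values:

```python
import numpy as np

# hypothetical raw scores from a small calibration set, one per image
calibration_scores = np.array([0.7901, 0.7913, 0.7925, 0.7944, 0.7978])

# e.g. treat the 20% of images the model handles worst as "low quality"
threshold = np.percentile(calibration_scores, 20)

def is_usable(score):
    # accept only images whose embedding the model produces robustly
    return score >= threshold
```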
Best
Philipp
Hi @pterhoer

> For some models, the range is between [0.790, 0.798], for others
If the quality value range is this narrow, does that mean all the faces are of good quality? How can the quality values be meaningful within such a narrow range?
Hi RyanCV,
The quality range itself is not important at all and can simply be rescaled to other ranges, such as [0, 1].
For good quality estimates, it is important that the ranking of the quality values makes sense.
That means that "bad" faces should always have lower quality estimates than "good" faces, independently of the overall quality range.
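To make this concrete, here is a tiny check with made-up values: any strictly increasing rescaling, min-max included, cannot change the ordering of the scores.

```python
import numpy as np

# hypothetical raw scores from a narrow-range model
raw = np.array([0.7901, 0.7925, 0.7913, 0.7978])
# min-max rescaling spreads them over [0, 1]...
rescaled = (raw - raw.min()) / (raw.max() - raw.min())
# ...but, being strictly increasing, it preserves the quality ranking
assert (np.argsort(raw) == np.argsort(rescaled)).all()
```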
Best
Philipp