approximate Threshold T1 and T2
Closed this issue · 4 comments
Hi, I'm trying to reproduce the results in the paper. Can you share the approximate thresholds T1 and T2 you used during training, and the approximate number of negative samples used to train the cali24 and cali48 nets? Also, have you tested on AFW? I tried a lot of methods, but my best AP is only about 90%. Thanks!
The threshold values I used are 0.05, 0.3, 0.3.
But I did not experiment on a lot of different combinations, so it's highly possible that they're not the best parameters.
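In case it helps others reading this, here is a minimal sketch of how such stage thresholds gate candidate windows in a detection cascade. It assumes the three values above correspond to the 12-, 24-, and 48-net stages; the window data and function names are hypothetical:

```python
# Hypothetical sketch: each cascade stage keeps only the windows whose
# face score passes that stage's threshold before the next net runs.

def filter_by_threshold(windows, threshold):
    """Keep candidate (box, score) windows whose score passes the threshold."""
    return [(box, score) for box, score in windows if score >= threshold]

# Assumed mapping of the values above to the three stages.
T12, T24, T48 = 0.05, 0.3, 0.3

# Example candidates from a first-stage scan: ((x1, y1, x2, y2), score).
windows = [((0, 0, 12, 12), 0.02), ((4, 4, 16, 16), 0.40), ((8, 8, 20, 20), 0.31)]
survivors = filter_by_threshold(windows, T12)  # these go on to the 24-net
```

A looser first threshold (0.05 here) keeps recall high early on, leaving the stricter later stages to prune false positives.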
Calibration nets do not require negative samples, or do you mean the total number of training samples?
I used roughly 1 million samples evenly split into 45 categories.
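For anyone unfamiliar with where the 45 comes from: the cascade-CNN paper defines 45 calibration patterns as 5 scale factors times 3 x-offsets times 3 y-offsets. A small sketch (the exact numeric values follow the paper; the function names are mine):

```python
from itertools import product

# 45 calibration classes = 5 scales x 3 x-offsets x 3 y-offsets.
SCALES = [0.83, 0.91, 1.0, 1.10, 1.21]
OFFSETS = [-0.17, 0.0, 0.17]

# Each pattern is a tuple (s_n, x_n, y_n).
CALIB_PATTERNS = list(product(SCALES, OFFSETS, OFFSETS))

def apply_calibration(box, pattern):
    """Adjust a detection box (x, y, w, h) by one calibration pattern."""
    x, y, w, h = box
    s, xn, yn = pattern
    return (x - xn * w / s, y - yn * h / s, w / s, h / s)
```

So "1 million samples evenly split into 45 categories" works out to roughly 22K crops per pattern.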
I did test on AFW, and the results were also not as great as in the paper.
Thanks! Sorry, I meant to ask about the negative samples used to train fc24 and fc48. In my experiments I can only collect around 50K for fc24. Another question about the training data: in the paper, the authors generate square annotations to approximate the ellipse face annotations on AFLW without explaining how the rectangles are generated. But the official dataset comes with rectangle annotations, and they differ from the example annotation in the paper. Did you use the annotations that come with the dataset, or generate them yourself?
Yeah, if you do hard negative mining, it's hard to get more than 50K training samples.
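This is why the negative pool shrinks at each stage: the hard negatives for fc24 are just the false positives that survived the earlier stage, i.e. windows the previous nets accepted that overlap no ground-truth face. A rough sketch, with hypothetical names and a plain IoU helper:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def mine_hard_negatives(candidates, gt_boxes, iou_thresh=0.5):
    """Keep (box, score) candidates that passed the earlier stage but do not
    overlap any ground-truth face; they become negatives for the next net."""
    return [(box, score) for box, score in candidates
            if all(iou(box, gt) < iou_thresh for gt in gt_boxes)]
```

Because each stage filters most windows already, only a small fraction of candidates qualify, which matches getting only ~50K negatives for fc24.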
I directly used the annotations provided by the dataset.
Thank you for your answers!