Negative image collection
Opened this issue · 5 comments
Thanks for sharing. How many negative images did you use in your model, and did you collect them randomly or with some other method?
I followed the method of the paper.
For the 12-net, negative patches are randomly cropped with varying size, scale, and position.
For the 24- and 48-net, I used the conventional hard negative mining method, as described in the paper.
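The random cropping step for the 12-net can be sketched roughly as follows. This is a minimal illustration, not the author's actual code; the `iou` helper and the 0.3 overlap cutoff are assumptions, and in practice each accepted patch would be cropped and resized to 12x12.

```python
import random

def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def sample_neg_patches(img_w, img_h, face_boxes, n, min_size=12, max_iou=0.3):
    """Randomly sample square patches of varying size and position that
    barely overlap any ground-truth face box."""
    patches = []
    while len(patches) < n:
        size = random.randint(min_size, min(img_w, img_h))
        x = random.randint(0, img_w - size)
        y = random.randint(0, img_h - size)
        box = (x, y, size, size)
        if all(iou(box, f) < max_iou for f in face_boxes):
            patches.append(box)
    return patches
```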
Best,
Gyeongsik Moon
M.S. Candidate
Department of ECE, SNU, Seoul, Korea
http://cv.snu.ac.kr/ http://cv.snu.ac.kr/hmyeong/
How many negative images are in your training demo? As many as possible?
For the 24- and 48-net, as many as possible.
Hi,
I am trying to reproduce your results on FDDB, and I have a question regarding training the 24-net and 48-net.
First, I trained the 12-net on AFLW (50k positives with flips, 0.5M negatives). This net works well on FDDB, reaching 0.9 recall at 350k false positives.
Then, following your approach, I ran hard negative mining on images (specifically the ImageNet validation set, 50k images) and collected 2.5M negative examples. After training for 100 epochs as in your code, this net cannot detect any face outside the training set. The score on FDDB is very low (0.2 at 14k negatives, with threshold = 0.1).
My analysis suggests the net overfits after 100 epochs of training (especially on the positive data).
Did you run into the same situation?
How long did you train your nets? For 100 epochs, as in the code?
What negative images did you use?
What are the threshold levels for each net (12 and 24)? In your code they are really low, so it looks like each net solves the same binary classification task independently rather than using the cascade structure. Or am I wrong?
Hi.
First of all, I followed the methods of the paper entirely.
So when I collected negative patches to train my 12-net, I used 200,000 non-face patches, unlike your 0.5M.
Anyway, after you train the 12-net, you have to match the recall rate and number of windows shown in Table 1. It was quite a long time ago that I implemented this paper, but I remember my thresholds were very small (1e-2 to 1e-4).
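Calibrating a stage threshold against a recall target like the one in Table 1 can be sketched like this. This is a hypothetical helper, not the author's code: `pos_scores` stands for the net's scores on a validation set of positive windows.

```python
def threshold_for_recall(pos_scores, target_recall):
    """Return the largest threshold that still keeps at least
    `target_recall` of the positive windows (score >= threshold)."""
    ranked = sorted(pos_scores, reverse=True)
    keep = max(1, int(round(target_recall * len(ranked))))
    return ranked[keep - 1]
```

With a high recall target per stage, the resulting threshold is driven down by the lowest-scoring true faces, which is consistent with the very small values (1e-2 to 1e-4) mentioned above.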
I used the COCO dataset for the negative database.
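The hard negative mining step for the later stages can be sketched as a simple loop over face-free images. This is schematic only; `detect` and `crop` are hypothetical helpers standing in for the current cascade stage and a patch extractor.

```python
def mine_hard_negatives(images, detect, crop, threshold):
    """Run the current detector on images known to contain no faces
    (e.g. scenes from a negative database) and keep every window it
    wrongly accepts as a hard negative for the next training round."""
    hard_negs = []
    for img in images:
        for box, score in detect(img):
            if score >= threshold:  # any acceptance here is a false positive
                hard_negs.append(crop(img, box))
    return hard_negs
```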