ee09115/spoofing_detection

Not successful with real faces!

Opened this issue · 8 comments

I found that your model only works well when the face's brightness is normal or low. I tried some real faces (like the attached photo) under bright lighting (a very common case) and it was not successful. Do you have any solution for this problem?
[attached photo]

Thank you for your observation. First, remember that the trained models for face presentation attack detection (PAD) were trained on the Replay-Attack database, which has specific lighting conditions. For this PAD approach to work in a real-world application, you will most likely have to train it for the specific scenario in which you want to perform face PAD.

That said, I noticed the same thing (the face PAD method is very sensitive to lighting changes) when I tried to apply the model trained on Replay-Attack in real-world conditions. Due to the nature of the color spaces used (YCrCb and Luv), their luminance channels (Y and L) carry, roughly speaking, the same information, so the classifier receives the same information twice. That may be why the classifier is so sensitive to ambient lighting and only works under proper lighting conditions. I want to study the influence of each histogram channel on the overall performance of the face PAD. Maybe removing one of the histograms that carry luminance information will work better (or worse; I have to test it).

Another thing I tried in real-world tests was to use a set of consecutive frames to decide between a bona fide image and an attack image (you can increase the "sample_number" variable to see the effect). Lastly, the decision threshold in the Python script (line 101) is 0.7, which is similar to the threshold obtained on the development set of the Replay-Attack database (see this figure). You may want to try other values for your specific problem.
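To illustrate the multi-frame idea above, here is a minimal sketch (not the repository's actual code) that averages the classifier's real-face probability over the last `sample_number` frames and compares it against the 0.7 threshold mentioned in the reply. The function name and the `None` return during warm-up are my own choices:

```python
def decide(scores, sample_number=5, threshold=0.7):
    """Average the real-face probability over the last `sample_number`
    frames; averaging smooths per-frame noise caused e.g. by lighting."""
    recent = list(scores)[-sample_number:]
    if len(recent) < sample_number:
        return None  # not enough frames collected yet to decide
    return sum(recent) / len(recent) >= threshold

# Five frames whose mean score (0.88) exceeds the 0.7 threshold
print(decide([0.90, 0.80, 0.90, 0.95, 0.85]))  # True
```

Increasing `sample_number` trades decision latency for robustness to per-frame misclassifications.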

I used the two models you provided in a real test, but many replay attacks could not be detected. Is that a limitation of the models?

@sunmoonb I did not understand your question. Can you rephrase it?

@ee09115 Thank you for your reply. In my tests, photos are often recognized as real.

@ee09115 He means the model recognizes photos and videos as a real person.

@sunmoonb As I mentioned earlier, the models I published were trained on public databases, and on those databases they work well. If you want to use these models in real-world conditions, you will most likely need to train them for those conditions. Right now I am working on the generalization capabilities of the proposed method. However, to use this approach for face PAD in your application, you need to train it for that application.
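For anyone retraining for their own deployment conditions, a minimal sketch of the idea, assuming the color-histogram features have already been extracted. The classifier choice (an RBF SVM) and the function name are illustrative, not necessarily what this repository uses:

```python
import numpy as np
from sklearn.svm import SVC

def train_pad_classifier(real_feats, attack_feats):
    """Fit a binary classifier on histogram features extracted from
    bona fide (label 1) and attack (label 0) face images captured
    under the target deployment conditions."""
    X = np.vstack([real_feats, attack_feats])
    y = np.concatenate([np.ones(len(real_feats)),
                        np.zeros(len(attack_feats))])
    # probability=True enables predict_proba, so the score can later be
    # compared against a tunable decision threshold (e.g. 0.7)
    return SVC(kernel="rbf", probability=True).fit(X, y)
```

Collect the training images with the same camera and lighting you will deploy with, then re-tune the decision threshold on a held-out development set.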

@ee09115 Thank you for your reply and contribution.

@therealtuanbui Please share detailed instructions for using this code; I am not able to run it. How can I detect spoofing from a webcam rather than a video using this code?

Thank you in advance.