ZitongYu/DeepFAS

Do these methods work in reality?

fisakhan opened this issue · 4 comments

In my experience, none of these methods work in real-life situations/environments. Can you please provide an implementation/code of a single anti-spoofing method that works on RGB images?

It depends to some extent on the quality and amount of your real-life training data...

I am not sure whether this one works well:
https://github.com/minivision-ai/Silent-Face-Anti-Spoofing

@ZitongYu MY quality? Or the quality of the input image? Anyhow, the moment it becomes dependent on the quality of the input image, it's a big failure, because someone who wants to spoof the system will definitely arrange a good-quality image.

And that is when these systems work best.

@fisakhan Sorry for my poor English expression. What I mean is the diverse quality and large quantity of your training data. Quality can be treated as a set of domains, such as compression, resolution, sensor ISP, and low/high-fidelity attacks.
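As a rough illustration of those quality domains, one could randomly re-compress and down-scale face crops at training time so the model sees the same kinds of degradation it will meet in deployment. This is only a sketch under my own assumptions (OpenCV-based; the function name `simulate_quality_domains` and the parameter ranges are made up for illustration), not code from this repo or from Silent-Face-Anti-Spoofing:

```python
import random
import cv2
import numpy as np

def simulate_quality_domains(img: np.ndarray) -> np.ndarray:
    """Randomly degrade an RGB face crop to mimic diverse capture quality."""
    h, w = img.shape[:2]

    # Random down/up-scaling to mimic low-resolution sensors.
    if random.random() < 0.5:
        scale = random.uniform(0.25, 1.0)
        small = cv2.resize(img, (max(1, int(w * scale)), max(1, int(h * scale))),
                           interpolation=cv2.INTER_AREA)
        img = cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)

    # Random JPEG re-compression to mimic different ISPs / transmission pipelines.
    if random.random() < 0.5:
        quality = random.randint(30, 95)
        ok, enc = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, quality])
        if ok:
            img = cv2.imdecode(enc, cv2.IMREAD_COLOR)

    return img
```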