Tracking and detection are lost when the face is rotated
Hello Yin,
I have used the project to train on my dataset from scratch; as you know, my target is eye-region detection, not face detection.
The pre-trained model that you provided outputs 68 key points in the last layer (`logits/BiasAdd`), but I need 40 key points.
I asked you how I can change the number of units in the logits layer and how to apply transfer learning,
but you didn't answer, so I tried training on my dataset from scratch. The results were good, but when I rotated my face, tracking and detection were lost.
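To make the logits question concrete, here is a minimal sketch of the transfer-learning change I am asking about. Everything here is my own assumption (the weight names, the feature width, and the `resize_logits` helper are made up for illustration, not taken from the project): keep the pre-trained backbone weights and re-initialize only the final dense layer so it outputs 40 points instead of 68.

```python
import numpy as np

# Hypothetical sketch: copy the pre-trained backbone, re-initialize only
# the final "logits" layer. Each key point is (x, y), so the pre-trained
# layer has 68 * 2 = 136 units and the new one needs 40 * 2 = 80.

FEATURES = 1024  # assumed width of the layer feeding the logits


def resize_logits(pretrained, n_points_new=40):
    """Return a new weight dict: backbone copied, logits re-initialized."""
    new_weights = dict(pretrained)        # backbone weights copied as-is
    rng = np.random.default_rng(0)
    units = n_points_new * 2              # x and y per key point
    # Fresh, small random init for the new logits layer only.
    new_weights["logits/kernel"] = rng.normal(0.0, 0.01, (FEATURES, units))
    new_weights["logits/bias"] = np.zeros(units)
    return new_weights


pretrained = {
    "backbone/conv1": np.ones((3, 3, 3, 32)),   # stand-in backbone weights
    "logits/kernel": np.ones((FEATURES, 136)),  # 68 points * 2
    "logits/bias": np.zeros(136),
}
new = resize_logits(pretrained)
print(new["logits/kernel"].shape)  # (1024, 80)
```

Is this roughly the right approach, or does the project expect the change somewhere else?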
In the documentation, the researchers say they used a two-level CNN (one level for prediction and the other for orientation and scaling).
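As I understand that two-level idea, it would look roughly like the sketch below. Every function here is a placeholder I made up (nothing is from the paper or the project's code): level 1 estimates in-plane rotation and scale, the frame would be normalized to an upright pose, and level 2 predicts landmarks on the normalized face, which are then mapped back to the original frame.

```python
import numpy as np

# Sketch of the two-level idea as I understand it. All function bodies
# are placeholders: a real level 1 and level 2 would be trained CNNs.


def estimate_pose(image):
    """Level 1 (placeholder): return in-plane rotation (radians) and scale."""
    return np.pi / 6, 1.0  # pretend the face is tilted 30 degrees


def predict_landmarks(image):
    """Level 2 (placeholder): landmarks on an upright, normalized face."""
    return np.array([[0.5, 0.5]] * 40)  # 40 eye-region points


def rotate_points(points, angle, center=(0.5, 0.5)):
    """Map landmarks from the normalized crop back to the original frame."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    return (points - center) @ rot.T + center


def two_level_predict(image):
    angle, scale = estimate_pose(image)
    # A real pipeline would rotate the frame by -angle and resize by
    # 1/scale here before level 2 runs; omitted in this sketch.
    pts = predict_landmarks(image)
    return rotate_points(pts, angle)  # back to the original orientation


frame = np.zeros((128, 128, 3))
landmarks = two_level_predict(frame)
print(landmarks.shape)  # (40, 2)
```

Is the missing orientation level the reason my single-level model fails on rotated faces?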
Can you help me find the origin of this issue?
Is the problem in the model trained from scratch or in the annotations?
I fixed the annotation landmarks of the images, but after training and freezing the model, predictions on video were wrong, even though predictions on still images were good.
Why are the predictions on images so different from the predictions on video?
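One thing I am checking on my side is whether the video loop preprocesses frames differently from the image path. This toy example is my own (the `model` function is a stand-in, not the frozen network), but a mismatch like it would produce exactly this symptom: good results on images, wrong results on video.

```python
import numpy as np

# Toy example of a preprocessing mismatch between the image and video
# paths: the image code normalizes pixels to [0, 1], while the video
# loop feeds raw uint8 values, so the same "model" sees inputs on
# completely different scales.


def model(x):
    """Stand-in for the frozen network: a fixed linear response."""
    return float(np.mean(x) * 2.0)


frame = np.full((64, 64, 3), 128, dtype=np.uint8)    # one video frame

image_path_input = frame.astype(np.float32) / 255.0  # as done for images
video_path_input = frame.astype(np.float32)          # normalization forgotten

print(model(image_path_input))  # about 1.0
print(model(video_path_input))  # 256.0 -- wildly out of range
```

Could something like this (normalization, color order, or crop size differing between the two paths) explain what I am seeing?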