ainrichman/Peppa-Facial-Landmark-PyTorch

Faster model for smaller input image size?

kafan1986 opened this issue · 4 comments

Can you train the model for a smaller image size, say 64x64 or 96x96? I believe it would bring the computation requirement down further, and there are lots of use cases where only small face crops are available. 160x160 is too much computation for edge devices.

Thanks in advance!
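
(For anyone who picks this up: a minimal sketch of the preprocessing side of such a change, assuming the landmark targets are stored as normalized [0, 1] coordinates so that only the input resize differs. `INPUT_SIZE` and the `preprocess` helper are illustrative, not the repo's actual training code.)

```python
# Illustrative only -- not the repo's training pipeline.
# If landmark labels are normalized to [0, 1], shrinking the
# network input is mostly a matter of resizing the face crop.
import torch
import torch.nn.functional as F

INPUT_SIZE = 96  # e.g. 96 instead of 160

def preprocess(face_crop: torch.Tensor) -> torch.Tensor:
    # face_crop: (3, H, W) float tensor in [0, 1]
    return F.interpolate(face_crop.unsqueeze(0),
                         size=(INPUT_SIZE, INPUT_SIZE),
                         mode="bilinear", align_corners=False)
```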

I have no plans to do so myself, but community contributions are welcome. By the way, I have tested the model from this repo on an Arm Cortex-A53 CPU core; the average inference time is 12 ms. I believe a model with only 26 MFLOPs is sufficient for most edge-device cases.
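
(Rough arithmetic, not a measurement from this repo: convolutional FLOPs scale roughly with the input's spatial area, so taking the ~26 MFLOPs figure at 160x160 as the baseline, the requested sizes would land around:)

```python
# Back-of-the-envelope estimate, assuming conv cost scales with
# input area and using the ~26 MFLOPs figure quoted above.
BASE_FLOPS = 26e6  # at 160x160
for size in (96, 64):
    ratio = (size / 160) ** 2
    print(f"{size}x{size}: ~{BASE_FLOPS * ratio / 1e6:.1f} MFLOPs "
          f"({ratio:.0%} of the 160x160 cost)")
# 96x96: ~9.4 MFLOPs (36% of the 160x160 cost)
# 64x64: ~4.2 MFLOPs (16% of the 160x160 cost)
```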

I would have liked to train it myself, but I only have an old Mac laptop for development, so apart from the macOS issues, the lack of a GPU is also a factor.

My requirement is inference in around 4 ms with 2-thread MNN inference; currently it is around 11-12 ms. If someone can train the model here at 96x96 or 64x64, I would be eternally grateful.
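
(In case it helps anyone reproduce these numbers, a sketch of 2-thread timing with MNN's Python API; `peppa.mnn`, the input shape, and the iteration count are assumptions, and real latency must of course be measured on the target device rather than a desktop host:)

```python
# Sketch: time the exported .mnn model on 2 CPU threads.
import time
import MNN
import numpy as np

interpreter = MNN.Interpreter("peppa.mnn")  # hypothetical export path
session = interpreter.createSession({"numThread": 2})
input_tensor = interpreter.getSessionInput(session)

# Dummy 160x160 RGB input in NCHW layout.
data = np.random.rand(1, 3, 160, 160).astype(np.float32)
tmp = MNN.Tensor((1, 3, 160, 160), MNN.Halide_Type_Float,
                 data, MNN.Tensor_DimensionType_Caffe)
input_tensor.copyFrom(tmp)

start = time.perf_counter()
for _ in range(100):
    interpreter.runSession(session)
print(f"avg inference: {(time.perf_counter() - start) * 10:.2f} ms")
```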

How many epochs were required for training the shared pre-trained model?

For me, I trained the model for about 50+ epochs.