HansRen1024/Easy-LAB

Runtime of the algorithm?

Closed this issue · 8 comments

Runtime of the algorithm?

@richipower 60 FPS on an NVIDIA TITAN X.

But your code is targeting the CPU, isn't it? Do you know how long one frame takes without CUDA?

@richipower
CPU: Intel® Xeon(R) CPU E5-2673 v3 @ 2.40GHz × 48
For model "WFLW_wo_mp", 2300ms per frame.
For model "WFLW_final", 6200ms per frame

Thank you :)

Hi @HansRen1024

I implemented it on Windows with a GeForce GTX 1080 Ti, but it runs at 40 ms per inference (without message passing) with a 256x256 input. Do you use a different input size for the network, or how are you getting 60 FPS? Did you modify the main program?
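
For reference, below is a minimal, self-contained timing sketch, assuming the Caffe C++ API; the model file names and the 1x3x256x256 input shape are placeholders, not the repo's actual code. It shows one way such a per-inference number can be measured:

```cpp
// Minimal latency-measurement sketch (not part of Easy-LAB), assuming the
// Caffe C++ API; the .prototxt/.caffemodel names below are placeholders.
#include <caffe/caffe.hpp>
#include <chrono>
#include <iostream>

int main() {
  caffe::Caffe::set_mode(caffe::Caffe::GPU);  // change to Caffe::CPU to compare

  // Hypothetical file names; use whatever model definition/weights you deploy.
  caffe::Net<float> net("lab_deploy.prototxt", caffe::TEST);
  net.CopyTrainedLayersFrom("lab_wflw.caffemodel");

  // Assume a single 1x3x256x256 input blob, as in the discussion above.
  caffe::Blob<float>* input = net.input_blobs()[0];
  input->Reshape(1, 3, 256, 256);
  net.Reshape();

  // Warm-up pass so cuDNN setup and memory allocation are not measured.
  net.Forward();

  const int kRuns = 50;
  const auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < kRuns; ++i) {
    net.Forward();
  }
  // For strict GPU timing, a cudaDeviceSynchronize() before stopping the
  // clock would ensure all queued kernels have finished.
  const auto end = std::chrono::steady_clock::now();
  const double ms =
      std::chrono::duration<double, std::milli>(end - start).count() / kRuns;
  std::cout << "average forward time: " << ms << " ms" << std::endl;
  return 0;
}
```

The warm-up pass matters on GPU because the first forward includes cuDNN algorithm selection and memory allocation, which would otherwise inflate the average.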

Hi @richipower ,

I just used the original program without any modifications. I cannot remember this program clearly; the 60 FPS figure may have come from the author, not from me. Besides, many factors can affect the inference runtime, such as the BLAS backend (OpenBLAS is faster than ATLAS) and the cuDNN version.
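
For context, in a standard Caffe build the BLAS backend and cuDNN support are selected in Makefile.config; the excerpt below is only an illustrative sketch, and the commented paths depend on your system:

```makefile
# Illustrative Caffe Makefile.config excerpt (values are system-dependent).
# Use OpenBLAS instead of the default ATLAS for faster CPU inference.
BLAS := open
# BLAS_INCLUDE := /usr/include/openblas
# BLAS_LIB := /usr/lib

# Build against cuDNN to speed up GPU inference.
USE_CUDNN := 1
```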

Hi @HansRen1024 I think you misread? In the paper he says the runtime is 60 ms, not 60 FPS?!

@richipower I did not read the paper carefully, sorry about that.