accelerate inference
Opened this issue · 9 comments
Hello, I want to know whether this accelerates inference. Recently, I have been trying to speed up inference for SiamRPN. I tried using fp16 instead of fp32; fp16 is said to be up to twice as fast as fp32.
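For reference, here is a minimal sketch of running a TorchScript model in half precision with libtorch. It assumes the network has been exported as a TorchScript file; the file name `siamrpn.pt` and the 1x3x255x255 input shape are placeholders, not this repository's actual interface.

```cpp
#include <torch/script.h>

int main() {
  // Load a TorchScript-exported model; "siamrpn.pt" is a placeholder name.
  torch::jit::Module module = torch::jit::load("siamrpn.pt");

  // Move weights to the GPU and cast them to half precision in one call.
  module.to(torch::kCUDA, torch::kHalf);
  module.eval();

  torch::NoGradGuard no_grad;

  // Dummy fp16 input on the GPU; a real tracker would feed its own crops
  // (the 255x255 shape here is only an illustrative search-region size).
  auto input = torch::rand({1, 3, 255, 255},
      torch::TensorOptions().device(torch::kCUDA).dtype(torch::kHalf));

  auto output = module.forward({input});
  return 0;
}
```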
It does accelerate inference, but the speedup is not very noticeable. My platform is PyTorch on an NVIDIA TX2.
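One thing worth checking when the speedup looks small: CUDA kernels launch asynchronously, so naive wall-clock timing can misstate the fp16 gain. A rough benchmarking sketch (the helper name `time_forward` is mine, not from this repo):

```cpp
#include <torch/script.h>
#include <torch/cuda.h>
#include <chrono>

// Average forward latency in milliseconds. CUDA kernels launch
// asynchronously, so synchronize before reading the clock.
double time_forward(torch::jit::Module& m, const torch::Tensor& x,
                    int iters = 100) {
  torch::NoGradGuard no_grad;
  for (int i = 0; i < 10; ++i) m.forward({x});  // warm-up
  torch::cuda::synchronize();
  auto t0 = std::chrono::steady_clock::now();
  for (int i = 0; i < iters; ++i) m.forward({x});
  torch::cuda::synchronize();
  auto t1 = std::chrono::steady_clock::now();
  return std::chrono::duration<double, std::milli>(t1 - t0).count() / iters;
}
```

Comparing this number for the fp32 and fp16 versions of the same module gives a fairer picture than timing a full tracking loop, where pre/post-processing can dominate.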
Can you run it normally on the TX2? When I run the code on a TX2, GPU utilization is very low. Do you know why?
Is it the case that libtorch cannot be used on the TX2 and only PyTorch works? If so, should I run it from the .pth file instead?
Even installing PyTorch on the TX2 is quite troublesome, since you have to compile it yourself. As for installing libtorch there, search around online; as far as I remember, I used libtorch on a desktop PC.
This feels quite slow to me. I haven't read the code closely, but judging purely by tracking speed, it is clearly slower than the open-source pysot running under PyTorch. It doesn't seem to make full use of the GPU; it may only be suitable for deployment, without any actual speedup.
Thanks!
@JensenHJS Try building libtorch from source instead of using the pre-compiled libtorch. I struggled with the same performance problem on some other projects before, and building from source solved it.
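For anyone who wants to try this, the usual route is the `tools/build_libtorch.py` script in the PyTorch source tree. A rough sketch of the steps (the `v1.x` tag is a placeholder; pick the release matching your PyTorch version):

```sh
# Clone PyTorch with its submodules; v1.x is a placeholder tag.
git clone --recursive --branch v1.x https://github.com/pytorch/pytorch.git
cd pytorch

# Build only the C++ libtorch libraries from source.
mkdir -p build_libtorch && cd build_libtorch
python ../tools/build_libtorch.py
```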