VISION-SJTU/PillarNet-LTS

Running inference in FP16

YoushaaMurhij opened this issue · 1 comment

Nice work!
I tried to run the model in inference mode. As expected, it is faster than CP-VoxelNet.
I am interested in running inference in FP16, but I am struggling with converting the input data to float16.
Could you please tell me which input variables should be converted to half precision besides the model itself?
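For reference, here is roughly what I am trying. This is only a minimal sketch: the placeholder network and tensor shapes are illustrative, and the real detector would come from this repo's build pipeline.

```python
import torch
import torch.nn as nn

# Placeholder network standing in for the detector; the actual model
# comes from this repo's build pipeline and config files.
model = nn.Sequential(nn.Linear(5, 64), nn.ReLU(), nn.Linear(64, 8))
model = model.cuda().half().eval()

# Only floating-point inputs should be cast to half; integer tensors
# such as pillar/voxel coordinates must keep their integer dtype.
points = torch.rand(20000, 5, device="cuda").half()  # x, y, z, intensity, t
coords = torch.zeros(20000, 4, dtype=torch.int32, device="cuda")  # batch, z, y, x

with torch.no_grad():
    out = model(points)  # coords would feed the pillarization step instead
```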

Thanks,
Youshaa

The lossless pillarization still needs to be changed; you can modify the related CUDA code for your needs.
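In case it helps: if modifying the CUDA kernel is not an option, one general PyTorch workaround (not something this repo documents) is mixed precision via `torch.cuda.amp.autocast`. The model weights and raw points stay in FP32, so an FP32-only pillarization kernel at the start of the pipeline still receives the dtype it was written for, while autocast-eligible layers (convolutions, linears) run in FP16. A minimal sketch, again with a placeholder network:

```python
import torch
import torch.nn as nn

# Placeholder network standing in for the detector; the real model would
# come from the repo's build pipeline.
model = nn.Sequential(nn.Linear(5, 64), nn.ReLU(), nn.Linear(64, 8))
model = model.cuda().eval()  # weights stay FP32

# Raw points stay FP32, so a custom FP32-only CUDA kernel placed at the
# start of the pipeline would still see the dtype it expects.
points = torch.rand(20000, 5, device="cuda")

with torch.no_grad(), torch.cuda.amp.autocast():
    out = model(points)  # eligible layers run in FP16 under autocast

print(out.dtype)  # torch.float16 for autocast-eligible final layers
```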