PRBonn/LocNDF

Reduce GPU Memory Usage

Closed this issue · 2 comments

Hi @louis-wiesmann,

Thanks a lot for publishing the code. I really like the approach of using NDFs as maps for robotics tasks. Especially given the small memory footprint, the quality of the map and the localization is surprisingly good. I just tested the pose-tracking experiment on my machine (RTX 2070 Super, AMD Ryzen 7 3800X) and noticed that Torch's memory usage increases a lot after loading your pre-trained maps, and roughly doubles again as soon as pose tracking starts. I haven't measured it exactly, but here is my nvtop view:

[screenshot: locndf_pt_nvtop]

(The first increase to ~20% is just my Ubuntu desktop idling.) Maybe Torch simply allocates a lot of memory up front. Do you know a trick to limit this?
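As a side note for measuring this more precisely than nvtop allows: PyTorch's own allocator counters report how much memory tensors actually occupy, separately from what the caching allocator has reserved (the nvtop number also includes the CUDA context itself). A minimal sketch using the standard `torch.cuda` statistics API:

```python
import torch

def report_gpu_memory(tag: str) -> None:
    """Print PyTorch's allocator statistics. 'allocated' is memory held by
    live tensors; 'reserved' includes the caching allocator's pool, which
    is what external tools like nvtop see on top of the CUDA context."""
    if not torch.cuda.is_available():
        print(f"{tag}: no CUDA device available")
        return
    alloc = torch.cuda.memory_allocated() / 1024**2
    reserved = torch.cuda.memory_reserved() / 1024**2
    peak = torch.cuda.max_memory_allocated() / 1024**2
    print(f"{tag}: allocated {alloc:.1f} MiB, "
          f"reserved {reserved:.1f} MiB, peak {peak:.1f} MiB")

# Example: call before/after loading the map and after a tracking step,
# e.g. report_gpu_memory("after map load")
```

Calling `torch.cuda.empty_cache()` returns the unused reserved pool to the driver, which often makes the nvtop curve drop without changing what the model actually needs.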

Thanks for the help in advance!

Best regards
Alex

Hey @aock,

One thing you could do is skip rendering the mesh. It is only used for visualization, not for registration, so it can be disabled. Another option is to reduce the "batch size", i.e. how many points you push through the network at the same time: feeding the whole point cloud through in one pass allocates more memory than processing the points in smaller chunks, though this may slow down the runtime. Finally, you can simply downsample the point cloud, which usually does not affect the performance much. I would just recommend keeping the original point coordinates rather than averaging points (e.g. in a voxel grid).

The nvtop plot was already created with mesh visualization disabled. I will try reducing the "batch size" and the number of points and see how it affects the performance. Thanks a lot for your help!