thien94/ORB_SLAM2_CUDA

Could you explain which part of SLAM is enhanced by the GPU?

Closed this issue · 4 comments

Hi, I have been working on SLAM on the GPU recently and I am new to this. Could you explain how the GPU speeds up SLAM? Thanks very much.

Hi, I am not an expert in this domain, but here is my understanding:

  • With a GPU present, you can accelerate a sequential program (in this case ORB-SLAM2) by offloading some sections of the code to the GPU while the rest of the code still runs on the CPU. A GPU can run thousands of threads simultaneously, allowing for faster processing than a CPU while being more power- and cost-efficient. Below is a picture showing how GPU acceleration works.

[image: how GPU acceleration works]
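
If it helps to see the offloading idea in code, here is a minimal CUDA sketch (illustrative only, not taken from this repo): the CPU launches one small kernel, `addOne`, across roughly a million GPU threads and copies the result back, while the rest of the program stays on the CPU.

```cpp
// Minimal CUDA sketch of the offloading idea (not code from this repo).
#include <cuda_runtime.h>
#include <vector>

__global__ void addOne(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one GPU thread per element
    if (i < n) data[i] += 1.0f;
}

int main() {
    const int n = 1 << 20;                          // ~1M elements
    std::vector<float> h(n, 0.0f);                  // host (CPU) data

    float* d = nullptr;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemcpy(d, h.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    addOne<<<(n + 255) / 256, 256>>>(d, n);         // thousands of threads run at once
    cudaDeviceSynchronize();

    cudaMemcpy(h.data(), d, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d);
    return 0;                                       // everything else stays on the CPU
}
```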

  • In ORB-SLAM2 there are three main threads: Tracking, Local Mapping, and Loop Closing, with the Tracking thread being the biggest bottleneck: a new image frame cannot be processed until the Tracking thread has finished with the previous frame. It is therefore the main focus of most of the GPU-acceleration work on ORB-SLAM2; a rough sketch of the thread layout follows below.
    Of course, every SLAM algorithm is different, so you must identify the bottleneck in your own case and focus on that part to make GPU acceleration effective.
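
The sketch below shows this thread layout in simplified form. The class and method names (`Tracking`, `LocalMapping`, `LoopClosing`, `GrabImage`, `Run`) are placeholders rather than the exact ORB-SLAM2 interfaces; the point is only that Tracking sits on the per-frame critical path while the other two threads run in the background.

```cpp
// Simplified sketch of the three-thread layout; not the actual ORB-SLAM2 API.
#include <thread>

struct LocalMapping { void Run() { /* refine the local map in the background */ } };
struct LoopClosing  { void Run() { /* detect and correct loops in the background */ } };
struct Tracking {
    void GrabImage() {
        // FAST corner detection + ORB extraction + pose estimation:
        // the per-frame bottleneck that GPU acceleration targets.
    }
};

int main() {
    LocalMapping mapper;
    LoopClosing  closer;
    Tracking     tracker;

    std::thread mappingThread(&LocalMapping::Run, &mapper);
    std::thread loopThread(&LoopClosing::Run, &closer);

    // Main loop: each new camera frame must wait until the previous
    // GrabImage() call has finished, so Tracking bounds the frame rate.
    for (int frame = 0; frame < 100; ++frame)
        tracker.GrabImage();

    mappingThread.join();
    loopThread.join();
    return 0;
}
```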

  • This repo uses the GPU-optimization work by yunchih, which accelerates several steps of the Tracking thread, namely FAST corner detection and ORB feature extraction, with CUDA. More generally, where possible you can parallelize OpenCV functions by replacing them with their OpenCV CUDA counterparts, as in the sketch below.
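
As a concrete example of such a replacement, here is a minimal sketch of extracting ORB features on the GPU with stock OpenCV (it assumes an OpenCV build with the CUDA modules such as cudafeatures2d, and it is not the custom CUDA extractor used in this repo):

```cpp
// CPU pipeline:            GPU-accelerated equivalent:
//   cv::Mat                  cv::cuda::GpuMat
//   cv::ORB                  cv::cuda::ORB
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudafeatures2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main() {
    // Hypothetical input frame; ORB-SLAM2 would receive this from the camera.
    cv::Mat frame = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
    if (frame.empty()) return 1;

    cv::cuda::GpuMat d_frame;
    d_frame.upload(frame);                      // copy host -> device

    auto orb = cv::cuda::ORB::create(1000);     // 1000 features, default pyramid settings
    std::vector<cv::KeyPoint> keypoints;
    cv::cuda::GpuMat d_descriptors;
    orb->detectAndCompute(d_frame, cv::noArray(), keypoints, d_descriptors);

    cv::Mat descriptors;
    d_descriptors.download(descriptors);        // copy device -> host if the CPU needs them
    return 0;
}
```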

I would recommend you read the authors' report here if you want to learn the details of their work.

Hope this helps.

Thanks very much.
But I have learned that the GPU can also do the feature matching in SLAM. Have you tried running the matching on the GPU?

I have not tried that personally. There are some papers that might be helpful for you:
  • GPU-Accelerated Real-Time Stereo Matching
  • Parallel, Real-Time Visual SLAM
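
If you just want to try matching on the GPU, OpenCV's CUDA module also ships a brute-force descriptor matcher. Below is a minimal sketch (assuming an OpenCV build with cudafeatures2d and ORB descriptors already uploaded to cv::cuda::GpuMat, e.g. from the extraction example above; matchOnGpu is a hypothetical helper, not part of this repo):

```cpp
// Minimal sketch of GPU descriptor matching with OpenCV's CUDA brute-force matcher.
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudafeatures2d.hpp>
#include <vector>

// d_desc1 and d_desc2 are ORB descriptors already uploaded to the GPU.
std::vector<cv::DMatch> matchOnGpu(const cv::cuda::GpuMat& d_desc1,
                                   const cv::cuda::GpuMat& d_desc2) {
    // NORM_HAMMING is the appropriate distance for binary ORB descriptors.
    auto matcher = cv::cuda::DescriptorMatcher::createBFMatcher(cv::NORM_HAMMING);

    std::vector<std::vector<cv::DMatch>> knn;
    matcher->knnMatch(d_desc1, d_desc2, knn, 2);   // two nearest neighbours per query

    // Lowe-style ratio test to discard ambiguous matches.
    std::vector<cv::DMatch> good;
    for (const auto& m : knn)
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance)
            good.push_back(m[0]);
    return good;
}
```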

Great! Thanks very much.