EVA-planner

EVA-planner: an EnVironmental Adaptive Gradient-based Local Planner for Quadrotors.

0. Overview

Authors: Lun Quan, Zhiwei Zhang, Xingguang Zhong, Chao Xu, and Fei Gao from ZJU FAST Lab.

Related Paper: EVA-Planner: Environmental Adaptive Quadrotor Planning, Lun Quan, Zhiwei Zhang, Chao Xu, and Fei Gao, accepted by the 2021 IEEE International Conference on Robotics and Automation (ICRA).

Video Links: Google, Bilibili (for Mainland China)

1. File Structure

  • All planning algorithms, along with other key modules such as mapping, are implemented in adaptive_planner:
    • path_searching: includes the multi-layer planner (A*, low-MPC and high-MPCC).

    • path_env: includes online mapping algorithms for the planning system (grid map and ESDF (Euclidean Signed Distance Field)).

    • plan_manage: high-level modules that schedule and call the mapping and planning algorithms. Interfaces for launching the whole system, as well as the configuration files, are contained here.

2. Compilation

Requirements: Ubuntu 16.04, 18.04, or 20.04 with a ros-desktop-full installation.
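As a quick sanity check (assuming the ROS environment has already been sourced, e.g. via /opt/ros/<distro>/setup.bash), the command below should print the installed distribution name, such as kinetic, melodic, or noetic:

rosversion -d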

Step 1. Install Armadillo, which is required by uav_simulator.

sudo apt-get install libarmadillo-dev

Step 2. We use NLopt to solve the non-linear optimization problem. Please follow the installation process in the NLopt documentation.
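If you build NLopt from source, a typical CMake-based installation looks like the sketch below (the clone URL and default install prefix are assumptions here; follow the NLopt documentation if your setup differs):

git clone https://github.com/stevengj/nlopt.git
cd nlopt
mkdir build && cd build
cmake ..
make
sudo make install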

Step 3. Clone the code from github.

git clone https://github.com/ZJU-FAST-Lab/EVA-planner.git

Step 4. Compile.

cd EVA-planner
catkin_make

3. Run a simple example

Open rviz:

source devel/setup.bash
roslaunch plan_manage rviz.launch

Then, open another terminal and run:

source devel/setup.bash
roslaunch plan_manage simulation.launch

Then you can press G on the keyboard and use the mouse to select a target in rviz.

4. Use GPU or not

The local_sensing package in this repo has two different versions: GPU and CPU. By default, the CPU version is used for better compatibility. By changing

set(ENABLE_CUDA false)

in the CMakeLists.txt of the local_sensing package to

set(ENABLE_CUDA true)

CUDA will be turned on to generate depth images as a real depth camera does.

Please also remember to change the 'arch' and 'code' flags in the line

    set(CUDA_NVCC_FLAGS 
      -gencode arch=compute_61,code=sm_61;
    ) 

in CMakeLists.txt if you encounter a compilation error caused by the NVIDIA graphics card you use. You can check the right compute capability code here.
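For example (an illustrative assumption, not part of the shipped configuration), an Ampere card such as an RTX 3070/3080/3090 has compute capability 8.6 and would use:

    set(CUDA_NVCC_FLAGS
      -gencode arch=compute_86,code=sm_86;
    )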

Don't forget to re-compile the code!

local_sensing provides the simulated sensors. If ENABLE_CUDA is true, it mimics the depth measured by stereo cameras and renders a depth image on the GPU. If ENABLE_CUDA is false, it publishes point clouds with no ray-casting. Our local mapping module automatically selects either depth images or point clouds as its input.
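To see which kind of sensor output is actually being published while the simulation runs, you can list the active topics; the grep pattern below is only a guess, since the exact topic names depend on the launch configuration:

rostopic list | grep -iE "depth|cloud"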

For installation of CUDA, please refer to the CUDA Toolkit website.
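After installing the toolkit, you can verify that the CUDA compiler is on your path (the reported version depends on your installation):

nvcc --version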

Acknowledgements

  • The framework of this repository is based on Fast-Planner by Zhou Boyu, who achieves impressive performance on quadrotor local planning.
  • We use NLopt for non-linear optimization.
  • The hardware architecture is based on an open source implementation from Teach-Repeat-Replan.
  • The benchmark compared in our paper is ICRA2020_RG_SDDM.

Licence

The source code is released under the GPLv3 license.

Maintenance

For any technical issues, please contact Lun Quan (lunquan@zju.edu.cn) or Fei Gao (fgaoaa@zju.edu.cn).

For commercial inquiries, please contact Fei Gao (fgaoaa@zju.edu.cn).