VINS-application
Mainly focused on the build process and explanation of VINS-Fusion, VINS-Fisheye, and OpenVINS
● This repository contains many branches, as follows:
 - Branches: OAK-D, Intel T265, Intel D435i, ZED-mini, Pointgrey_myAHRS, FlightGoggles
 - Including config.yaml files and calibration data
 - git clone -b <branch_name> --single-branch https://github.com/engcang/vins-application
● Tested on: Jetson Xavier NX, Jetson Xavier AGX, Jetson TX2, Intel i9-10900k, i7-6700k, i7-8700k, i5-9600k
● Result clips: here
 - VINS-Fusion for PX4 with masking: here
 - frame changed from world to map
Index
0. Algorithms:
 - VINS-Fusion: CPU version / GPU version
   - Mainly uses Ceres-solver, OpenCV, and Eigen; the performance of VINS is strongly proportional to CPU performance and some parameters
 - VINS-Fisheye: VINS-Fusion's extension with more camera_models and CUDA acceleration
   - only for OpenCV 3.4.1 and Jetson TX2 (I guess; I failed on i9-10900k + RTX3080)
 - OpenVINS: MSCKF-based VINS
1. Parameters
 - VINS-Fusion / VINS-Fisheye / OpenVINS
2. Prerequisites
 ● Ceres solver and Eigen: mandatory for VINS (build Eigen first)
 ● CUDA: necessary for GPU version
   - optional, but recommended with CUDA: cuDNN
 ● OpenCV with CUDA: necessary for GPU version
   - optional, but necessary for recent versions: with OpenCV Contrib
   - optional, but recommended with CUDA: with cuDNN (also with Contrib)
 ● CV_Bridge with built OpenCV: necessary for GPU version and for general ROS usage
   - for OpenCV 3.x / for OpenCV 4.x
 ● USB performance: improving the performance of sensors on USB
 ● IMU-Camera calibration: synchronization, time offset, extrinsic parameter
 ● IMU-Camera rotational extrinsic: rotational extrinsic between IMU and camera
3. Installation and Execution
 - VINS-Fusion / VINS-Fisheye / OpenVINS
 - VINS-Fusion with OpenCV4
 - Troubleshooting for VINS-Fusion / VINS-Fisheye / OpenVINS
4. Comparison & Application results
 - VINS-Fusion / VINS-Fisheye / OpenVINS
5. VINS on mini onboard PCs
1. Parameters
● VINS-Fusion:
[click to see]
- Camera frame rate
  - lower: less time delay, but poorer performance
  - higher: more time delay, but better performance
  - has to be set in the camera launch file: 10~30 Hz
- Max tracked feature number max_cnt
  - 100~150; same trade-off as the camera frame rate
- Time offset between IMU and cameras: estimate_td: 1, td: value from Kalibr
- GPU acceleration: use_gpu: 1, use_gpu_acc_flow: 1 (GPU version only)
- Multithreading: multiple_thread: 1
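These parameters live in each branch's config.yaml. A minimal sketch of the relevant block follows; the key names come from VINS-Fusion's example configs (double-check against the config.yaml in your branch), and the values are purely illustrative, not tuned:

```yaml
# illustrative values only - tune for your own camera/IMU setup
max_cnt: 150          # max number of tracked features
estimate_td: 1        # estimate the IMU-camera time offset online
td: 0.000             # initial time offset, e.g. taken from Kalibr
use_gpu: 1            # GPU version only
use_gpu_acc_flow: 1   # GPU-accelerated optical flow (GPU version only)
multiple_thread: 1    # enable multithreading
```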
2. Prerequisites
● Ceres solver and Eigen: Mandatory for VINS
[click to see]
$ wget -O eigen.zip https://gitlab.com/libeigen/eigen/-/archive/3.3.7/eigen-3.3.7.zip #check version
$ unzip eigen.zip
$ cd eigen-3.3.7
$ mkdir build && cd build
$ cmake .. && sudo make install
- Eigen version 3.3.90 or later is needed to use slicing and indexing, as here
$ git clone https://gitlab.com/libeigen/eigen.git
$ cd eigen
$ mkdir build && cd build
$ cmake .. && sudo make install
- Ceres solver home
$ sudo apt-get install -y cmake libgoogle-glog-dev libatlas-base-dev libsuitesparse-dev
$ wget http://ceres-solver.org/ceres-solver-1.14.0.tar.gz
$ tar zxf ceres-solver-1.14.0.tar.gz
$ mkdir ceres-bin
$ mkdir solver && cd ceres-bin
$ cmake ../ceres-solver-1.14.0 -DEXPORT_BUILD_DIR=ON -DCMAKE_INSTALL_PREFIX="../solver" # installs into a chosen directory without needing root privileges
$ make -j8 # 8 : number of cores
$ make test
$ make install
● CUDA: Necessary for GPU version
[click to see]
- Install CUDA and the graphics driver:
  ● (If you will use TensorRT) The latest TensorRT (7.2.3) supports CUDA 10.2, 11.0 update 1, 11.1 update 1, and 11.2 update 1. doc
- Ubuntu
$ sudo apt install gcc make
# get the right version of the CUDA (with graphics driver) .deb file at https://developer.nvidia.com/cuda-downloads
# and follow the installation instructions there!
# a .run file can also install the NVIDIA graphics driver, but the .deb file is recommended if you will install TensorRT later
# if you want to install only the graphics driver, get the driver install script at https://www.nvidia.com/Download/index.aspx?lang=en-us
# sudo ./NVIDIA_<graphic_driver_installer>.run --dkms
# the --dkms option is recommended when installing the NVIDIA driver, so the driver is registered along with the kernel
# otherwise, the NVIDIA graphics driver will be gone after a kernel upgrade via $ sudo apt upgrade
$ sudo reboot
$ gedit ~/.bashrc
# type and save
export PATH=<CUDA_PATH>/bin:$PATH #ex: /usr/local/cuda-11.1
export LD_LIBRARY_PATH=<CUDA_PATH>/lib64:$LD_LIBRARY_PATH #ex : /usr/local/cuda-11.1
$ . ~/.bashrc
# check if installed well
$ dpkg-query -W | grep cuda
- check CUDA version using nvcc --version
# check installed cuda version
$ nvcc --version
# if nvcc --version does not print out CUDA,
$ gedit ~/.profile
# type below and save
export PATH=<CUDA_PATH>/bin:$PATH #ex: /usr/local/cuda-11.1
export LD_LIBRARY_PATH=<CUDA_PATH>/lib64:$LD_LIBRARY_PATH #ex : /usr/local/cuda-11.1
$ source ~/.profile
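Running these export lines more than once clutters ~/.bashrc or ~/.profile. A small sketch of an idempotent append, demonstrated on a temp file so nothing real is modified (the CUDA path is only an example):

```shell
rc=$(mktemp)                        # stand-in for ~/.bashrc
cuda_path=/usr/local/cuda-11.1      # example path - use your actual install
line="export PATH=${cuda_path}/bin:\$PATH"
# append only if the exact line is not already present
grep -qxF "$line" "$rc" || echo "$line" >> "$rc"
grep -qxF "$line" "$rc" || echo "$line" >> "$rc"    # second run is a no-op
grep -c "cuda-11.1" "$rc"                           # prints 1, not 2
```

The same `grep -qxF ... || echo ... >>` pattern works directly on ~/.bashrc.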
● Troubleshooting for the NVIDIA driver or CUDA: see /var/log/cuda-installer.log or /var/log/nvidia-install.log
 - "Installation failed. See log at /var/log/cuda-installer.log for details" => mostly because the X server is being used
   - turn off the X server and install:
# if you are using lightdm
$ sudo service lightdm stop
# or if you are using gdm3
$ sudo service gdm3 stop
# then press Ctrl+Alt+F3 -> login with your ID/password
$ sudo sh cuda_<version>_linux.run
 - "The kernel module failed to load. Secure boot is enabled on this system, so this is likely because it was not signed by a key that is trusted by the kernel...."
   - turn off Secure Boot as in the reference below; in this case, you should turn off both Secure Boot and the X server (as above)
● (optional) cuDNN: GPU-accelerated neural network library used with CUDA
[click to see]
- Download here
- install as below: reference in Korean
$ sudo tar zxf cudnn.tgz
$ sudo cp extracted_cuda/include/* <CUDA_PATH>/include/ #ex /usr/local/cuda-11.1/include/
$ sudo cp -P extracted_cuda/lib64/* <CUDA_PATH>/lib64/ #ex /usr/local/cuda-11.1/lib64/
$ sudo chmod a+r <CUDA_PATH>/lib64/libcudnn* #ex /usr/local/cuda-11.1/lib64/libcudnn*
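The -P flag matters in the lib64 copy above: cuDNN ships libcudnn.so as a chain of version symlinks, and a plain cp would flatten each link into a duplicate full copy. A quick demonstration in a temp directory (the file names are illustrative, not the real cuDNN version):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/libcudnn.so.8.0.5"                  # stands in for the real library file
ln -s libcudnn.so.8.0.5 "$src/libcudnn.so.8"    # version symlink, as cuDNN ships it
cp -P "$src"/* "$dst"/                          # -P copies the symlink as a symlink
[ -L "$dst/libcudnn.so.8" ] && echo "symlink preserved"
```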
● OpenCV with CUDA: Necessary for GPU version
[click to see]
- Build OpenCV with CUDA - references: link 1, link 2
- for Xavier, do as below or use the sh file from JetsonHacks here
- If you want to use the C API (e.g. Darknet YOLO), consider:
  - the -D OPENCV_GENERATE_PKGCONFIG=YES option is also needed for OpenCV 4.x
  - and copy the generated opencv4.pc file to /usr/local/lib/pkgconfig, or to /usr/lib/aarch64-linux-gnu/pkgconfig for Jetson boards
$ sudo apt-get purge libopencv* python-opencv
$ sudo apt-get update
$ sudo apt-get install -y build-essential pkg-config
$ sudo apt-get install -y cmake libavcodec-dev libavformat-dev libavutil-dev \
libglew-dev libgtk2.0-dev libgtk-3-dev libjpeg-dev libpng-dev libpostproc-dev \
libswscale-dev libtbb-dev libtiff5-dev libv4l-dev libxvidcore-dev \
libx264-dev qt5-default zlib1g-dev libgl1 libglvnd-dev pkg-config \
libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev mesa-utils #libeigen3-dev # recommend to build from source : http://eigen.tuxfamily.org/index.php?title=Main_Page
$ sudo apt-get install python2.7-dev python3-dev python-numpy python3-numpy
$ mkdir <opencv_source_directory> && cd <opencv_source_directory>
$ wget -O opencv.zip https://github.com/opencv/opencv/archive/3.4.1.zip # check version
$ unzip opencv.zip
$ cd <opencv_source_directory>/opencv && mkdir build && cd build
# check your BIN version : http://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/
# 8.6 for RTX3080 7.2 for Xavier, 5.2 for GTX TITAN X, 6.1 for GTX TITAN X(pascal), 6.2 for TX2
# -D BUILD_opencv_cudacodec=OFF #for cuda10-opencv3.4
$ cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_C_COMPILER=gcc-6 \
-D CMAKE_CXX_COMPILER=g++-6 \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D OPENCV_GENERATE_PKGCONFIG=YES \
-D WITH_CUDA=ON \
-D CUDA_ARCH_BIN=8.6 \
-D CUDA_ARCH_PTX="" \
-D ENABLE_FAST_MATH=ON \
-D CUDA_FAST_MATH=ON \
-D WITH_CUBLAS=ON \
-D WITH_LIBV4L=ON \
-D WITH_GSTREAMER=ON \
-D WITH_GSTREAMER_0_10=OFF \
-D WITH_QT=ON \
-D WITH_OPENGL=ON \
-D BUILD_opencv_cudacodec=OFF \
-D CUDA_NVCC_FLAGS="--expt-relaxed-constexpr" \
-D WITH_TBB=ON \
../
$ time make -j8 # 8 : number of cores
$ sudo make install
$ sudo rm -r <opencv_source_directory> #optional
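A quick sanity check after make install: cv2.cuda.getCudaEnabledDeviceCount() (part of OpenCV's standard Python bindings) reports how many CUDA devices OpenCV can see; a CPU-only build reports 0. Guarded so it degrades gracefully if the Python binding is not on the path:

```shell
python3 - <<'EOF'
try:
    import cv2
    # 0 here means the build has no usable CUDA device
    print("OpenCV", cv2.__version__, "| CUDA devices:", cv2.cuda.getCudaEnabledDeviceCount())
except ImportError:
    print("cv2 not importable - check the install prefix / PYTHONPATH")
EOF
```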
● Troubleshooting OpenCV build errors:
 - "Please include the appropriate gl headers before including cuda_gl_interop.h" => references 1, 2, 3
 - "modules/cudacodec/src/precomp.hpp:60:37: fatal error: dynlink_nvcuvid.h: No such file or directory / compilation terminated." --> for CUDA version 10 => reference here
   - cmake ... -D BUILD_opencv_cudacodec=OFF ...
 - "CUDA_nppicom_LIBRARY not found" => reference here
   - $ sudo apt-get install nvidia-cuda-toolkit
   - or edit FindCUDA.cmake and OpenCVDetectCUDA.cmake as here
● (Optional) to also build the OpenCV contrib modules,
[click to see]
- add -D OPENCV_EXTRA_MODULES_PATH option as below:
$ cd <opencv_source_directory>
$ wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/3.4.1.zip #check version
$ unzip opencv_contrib.zip
$ cd <opencv_source_directory>/build
$ cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_C_COMPILER=gcc-6 \
-D CMAKE_CXX_COMPILER=g++-6 \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D OPENCV_GENERATE_PKGCONFIG=YES \
-D WITH_CUDA=ON \
-D CUDA_ARCH_BIN=6.2 \
-D CUDA_ARCH_PTX="" \
-D ENABLE_FAST_MATH=ON \
-D CUDA_FAST_MATH=ON \
-D WITH_CUBLAS=ON \
-D WITH_LIBV4L=ON \
-D WITH_GSTREAMER=ON \
-D WITH_GSTREAMER_0_10=OFF \
-D WITH_QT=ON \
-D WITH_OPENGL=ON \
-D BUILD_opencv_cudacodec=OFF \
-D CUDA_NVCC_FLAGS="--expt-relaxed-constexpr" \
-D WITH_TBB=ON \
-D OPENCV_EXTRA_MODULES_PATH=../opencv_contrib-3.4.1/modules \
../
$ time make -j1 # important: use only one core to prevent compile errors
$ sudo make install
● (Optional) to also build OpenCV with cuDNN support,
[click to see]
- add -D OPENCV_DNN_CUDA=ON and -D WITH_CUDNN=ON options as below:
$ cd <opencv_source_directory>/build
$ cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_C_COMPILER=gcc-6 \
-D CMAKE_CXX_COMPILER=g++-6 \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D OPENCV_GENERATE_PKGCONFIG=YES \
-D WITH_CUDA=ON \
-D OPENCV_DNN_CUDA=ON \
-D WITH_CUDNN=ON \
-D CUDA_ARCH_BIN=6.2 \
-D CUDA_ARCH_PTX="" \
-D ENABLE_FAST_MATH=ON \
-D CUDA_FAST_MATH=ON \
-D WITH_CUBLAS=ON \
-D WITH_LIBV4L=ON \
-D WITH_GSTREAMER=ON \
-D WITH_GSTREAMER_0_10=OFF \
-D WITH_QT=ON \
-D WITH_OPENGL=ON \
-D BUILD_opencv_cudacodec=OFF \
-D CUDA_NVCC_FLAGS="--expt-relaxed-constexpr" \
-D WITH_TBB=ON \
-D OPENCV_EXTRA_MODULES_PATH=../opencv_contrib-4.5.2/modules \
../
$ time make -j1 # use fewer cores to prevent compile errors
$ sudo make install
● CV_Bridge with built OpenCV: necessary for anyone who built OpenCV manually as above
● CV_bridge with OpenCV 3.X version
[click to see]
- For the GPU version, if OpenCV with CUDA was built manually, cv_bridge must also be built manually
$ cd ~/catkin_ws/src && git clone https://github.com/ros-perception/vision_opencv
# since ROS Noetic was added, we have to check out the melodic tree
$ cd vision_opencv && git checkout origin/melodic
$ gedit vision_opencv/cv_bridge/CMakeLists.txt
- Edit the OpenCV PATHS in CMakeLists.txt and include the cmake file
# if you get an error, try the other of these two lines
find_package(OpenCV 3 REQUIRED PATHS /usr/local/share/OpenCV NO_DEFAULT_PATH
#find_package(OpenCV 3 HINTS /usr/local/share/OpenCV NO_DEFAULT_PATH
COMPONENTS
opencv_core
opencv_imgproc
opencv_imgcodecs
CONFIG
)
include(/usr/local/share/OpenCV/OpenCVConfig.cmake) #under catkin_python_setup()
$ cd .. && catkin build cv_bridge
● CV_bridge with OpenCV 4.X version
[click to see]
- Referred here
$ cd ~/catkin_ws/src && git clone https://github.com/ros-perception/vision_opencv
# since ROS Noetic was added, we have to check out the melodic tree
$ cd vision_opencv && git checkout origin/melodic
$ gedit vision_opencv/cv_bridge/CMakeLists.txt
- Add options and edit OpenCV PATHS in CMakeLists
# add right after project()
set(CMAKE_CXX_STANDARD 11)
# edit find_package(OpenCV)
#find_package(OpenCV 4 REQUIRED PATHS /usr/local/share/opencv4 NO_DEFAULT_PATH
find_package(OpenCV 4 REQUIRED
COMPONENTS
opencv_core
opencv_imgproc
opencv_imgcodecs
CONFIG
)
include(/usr/local/lib/cmake/opencv4/OpenCVConfig.cmake)
- Edit cv_bridge/src/CMakeLists.txt
# line number 35, Edit 3 -> 4
if (OpenCV_VERSION_MAJOR VERSION_EQUAL 4)
- Edit cv_bridge/src/module_opencv3.cpp
// line number 110
// UMatData* allocate(int dims0, const int* sizes, int type, void* data, size_t* step, int flags, UMatUsageFlags usageFlags) const
UMatData* allocate(int dims0, const int* sizes, int type, void* data, size_t* step, AccessFlag flags, UMatUsageFlags usageFlags) const
// line number 140
// bool allocate(UMatData* u, int accessFlags, UMatUsageFlags usageFlags) const
bool allocate(UMatData* u, AccessFlag accessFlags, UMatUsageFlags usageFlags) const
$ cd .. && catkin build cv_bridge
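After the build, it is worth confirming that cv_bridge actually linked against the hand-built OpenCV rather than a system copy. The devel path below assumes a default catkin workspace layout (adjust if yours differs):

```shell
lib=~/catkin_ws/devel/lib/libcv_bridge.so
if [ -f "$lib" ]; then
    ldd "$lib" | grep -i opencv    # should list your /usr/local libopencv_* libraries
else
    echo "libcv_bridge.so not found - build cv_bridge first"
fi
```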
● USB performance: the performance of USB-connected sensors has to be improved
[click to see]
$ sudo ./flash.sh -k kernel -C "usbcore.usbfs_memory_mb=1000" -k kernel-dtb jetson-xavier mmcblk0p1
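The flash.sh line above is Jetson-specific (it bakes the parameter into the kernel command line at flash time). On a desktop host, the same usbcore.usbfs_memory_mb kernel parameter can usually be read, and raised with root, at runtime through sysfs; a guarded check:

```shell
p=/sys/module/usbcore/parameters/usbfs_memory_mb
if [ -r "$p" ]; then
    echo "usbfs_memory_mb = $(cat "$p")"    # raise it with: echo 1000 | sudo tee "$p"
else
    echo "usbfs_memory_mb not exposed on this kernel"
fi
```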
● Calibration : Kalibr -> synchronization, time offset, extrinsic parameter
[click to see]
- Kalibr -> synchronization, time offset
- For ZED cameras : here
- When calibrating a fisheye camera such as the T265
● Trouble shooting for Kalibr errors
- ImportError: No module named Image reference
$ gedit kalibr/aslam_offline_calibration/kalibr/python/kalibr_camera_calibration/MulticamGraph.py
#import Image
from PIL import Image
- focal length initialization error
$ gedit kalibr/aslam_cv/aslam_cameras/include/aslam/cameras/implementation/PinholeProjection.hpp
// edit the if statement at line 781
// comment out lines 782 to 795 and add:
f_guesses.push_back(2000.0); // initial guess of focal length!!!!
- cameras are not connected
$ gedit kalibr/aslam_offline_calibration/kalibr/python/kalibr_calibrate_cameras
# comment from line 201 to 205
● IMU-Camera rotational extrinsic example
[click to see]
 - Between the ROS standard body (IMU) frame and the camera
 - Left view: between the ROS standard body (IMU) frame and a down-pitched (downward-looking) camera
3. Installation and Execution
● VINS-Fusion
[with `OpenCV3`(original): click to see]
- git clone and build from source
$ cd ~/catkin_ws/src
$ git clone https://github.com/HKUST-Aerial-Robotics/VINS-Fusion #CPU
or
$ git clone https://github.com/pjrambo/VINS-Fusion-gpu #GPU
$ cd .. && catkin build camera_models # camera models first
$ catkin build
Before building VINS-Fusion, the process below may be required.
 - For the GPU version, edit CMakeLists.txt for loop_fusion and vins_estimator
$ cd ~/catkin_ws/src/VINS-Fusion-gpu/loop_fusion && gedit CMakeLists.txt
or
$ cd ~/catkin_ws/src/VINS-Fusion-gpu/vins_estimator && gedit CMakeLists.txt
##For loop_fusion : line 19
#find_package(OpenCV)
include(/usr/local/share/OpenCV/OpenCVConfig.cmake)
##For vins_estimator : line 20
#find_package(OpenCV REQUIRED)
include(/usr/local/share/OpenCV/OpenCVConfig.cmake)
[with `OpenCV4`: click to see]
- git clone and build; a few cv-related lines of code are changed from the original repo.
$ cd ~/catkin_ws/src
$ git clone https://github.com/engcang/vins-application #Only CPU version yet
$ rm -r vins-fusion-px4
$ cd ..
$ catkin build
● Trouble shooting for VINS-Fusion
[click to see]
- Aborted error when running vins_node :
$ echo "export MALLOC_CHECK_=0" >> ~/.bashrc
$ source ~/.bashrc
- If you want to try to deal with NaNs, refer here
● VINS-Fisheye: only for OpenCV 3.4.1 and Jetson TX2 (I guess; I failed on i9-10900k + RTX3080)
[click to see]
- Get libSGM and install it with the OpenCV option as below:
$ git clone https://github.com/fixstars/libSGM
$ cd libSGM
$ git submodule update --init
- check and edit CMakeLists.txt
$ gedit CMakeLists.txt
- set BUILD_OPENCV_WRAPPER=ON and ENABLE_TESTS=ON
$ mkdir build && cd build
$ cmake .. -DBUILD_OPENCV_WRAPPER=ON -DENABLE_TESTS=ON
$ make -j6
$ sudo make install
- run the test
$ cd libSGM/build/test && ./sgm-test
- Get VINS-Fisheye and install it
$ cd ~/catkin_ws/src
$ git clone https://github.com/xuhao1/VINS-Fisheye
$ cd ..
build camera_models first
$ catkin build camera_models
$ gedit src/VINS-Fisheye/vins_estimator/CMakeLists.txt
- edit as below:
set(ENABLE_BACKWARD false)
- or, instead, install libdw:
$ sudo apt install libdw-dev
$ catkin build
● OpenVINS
[click to see]
$ cd ~/catkin_ws/src
$ git clone https://github.com/rpng/open_vins/
$ cd ..
$ catkin build
4. Comparison & Application results
 - To convert ROS topics into nav_msgs/Path for visualization in RViz: use this GitHub repo
 - To convert compressed images into raw images: use this code
● VINS-Fusion
[click to see]
Simulation
- /tf vs VINS-Mono on FlightGoggles: youtube, with CPU youtube
- Loop Fusion vs vins node on FlightGoggles: youtube
- VINS mono VS ROVIO: youtube
- VINS-Mono vs ROVIO vs ORB-SLAM2: youtube
- VINS-Fusion (Stereo) vs S-MSCKF on FlightGoggles: youtube
- VINS-Fusion (Stereo) based autonomous flight and 3D mapping using RGB-D camera: youtube
Real world
- Hand-held - VINS-Mono with PointGrey camera, myAHRS+ IMU on Jetson Xavier: youtube; moved faster: youtube
- Hand-held - VINS (GPU version) with PointGrey, myAHRS on Intel i7-8700k, TITAN RTX: youtube
- Hand-held - VINS (GPU version, stereo) with Intel D435i on Xavier, max CPU clock: youtube and youtube2: screen
- Hand-held - VINS-Fusion (Stereo) with Intel D435i and Pixhawk4 mini fused with T265 camera: here
- Hand-held - VINS-Fusion (stereo) with Intel D435i and Pixhawk4 mini on 1km long underground tunnel: here
- Hand-held - VINS-Fusion GPU version test using T265: here
- Hand-held - VINS-Fusion (stereo) test using OAK-D: here
- Real-Drone - VINS-Fusion with Intel D435i and Pixhawk4 mini on a real hexarotor: here
- Real-Drone - VINS-Fusion with Intel D435i and Pixhawk4 mini on a real quadrotor: here
● OpenVINS
[click to see]
- OpenVINS on KAIST VIO dataset: result youtube
- use this launch file including parameters
5. VINS on mini onboard PCs
● Qualcomm RB5 vs Khadas VIM3 Pro
 - Video