Motion Compensated FlowNet

Event Data Optical Flow Estimation

Odometry Estimation

Abstract

We developed an algorithm that estimates the optical flow of a scene and the corresponding camera odometry from a sequence of event data. The idea is adapted from the paper "Unsupervised Event-based Learning of Optical Flow, Depth, and Egomotion" by Alex Zihao Zhu, Liangzhe Yuan, Kenneth Chaney, and Kostas Daniilidis (arXiv, 2018) [2].

Also, check out our event-based interest point detection project: https://github.com/mingyip/pytorch-superpoint

Installation

The code runs on Python 3.6, PyTorch 1.5.0, and ROS. We ran our code on Ubuntu 18.04 with ROS Melodic. Installation instructions for ROS can be found here. To generate synthetic event data, we used "ESIM: an Open Event Camera Simulator"; you may find installation details for ESIM here.

To install the conda environment:

conda create --name py36-sp python=3.6
conda activate py36-sp
pip install -r requirements.txt
pip install -r requirements_torch.txt # install pytorch
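
A quick way to verify that the environment picked up the expected PyTorch build (1.5.0) and can see the GPU:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"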

To install ROS Melodic:

sudo apt-get update
sudo apt-get install ros-melodic-desktop-full
sudo apt install python-rosdep python-rosinstall python-rosinstall-generator python-wstool build-essential
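
After the packages are installed, the standard ROS Melodic setup steps also apply (these are generic ROS steps, not specific to this repo):

sudo rosdep init
rosdep update
echo "source /opt/ros/melodic/setup.bash" >> ~/.bashrc
source ~/.bashrc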

After installing ROS, don't forget to install the Event Camera Simulator.
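
Once ESIM is built, a synthetic sequence can typically be generated through its roslaunch interface, e.g. (the launch file and config path below follow the ESIM wiki examples; adjust them to your setup):

roslaunch esim_ros esim.launch config:=cfg/example.conf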

Dataset

We used data sequences (in rosbag format) from MVSEC [3] and IJRR (the ETH event dataset) [4] to further train our network. The code processes events in HDF5 format. To convert the rosbags to this format, open a new terminal, source a ROS workspace, and use the conversion tools from https://github.com/TimoStoff/event_cnn_minimal [5]:

source /opt/ros/melodic/setup.bash
python events_contrast_maximization/tools/rosbag_to_h5.py <path/to/rosbag/or/dir/with/rosbags> --output_dir <path/to/save_h5_events> --event_topic <event_topic> --image_topic <image_topic>
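
For example, to convert an MVSEC bag (the DAVIS topic names below are the ones MVSEC bags typically use; check yours with rosbag info first):

rosbag info data/outdoor_day1_data.bag
python events_contrast_maximization/tools/rosbag_to_h5.py data/outdoor_day1_data.bag --output_dir data/ --event_topic /davis/left/events --image_topic /davis/left/image_raw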

Usage

To train the network on the dataset, run the command below. Training parameters can be set in config.py.

python train.py --load_path data/outdoor_day1_data.h5
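
Under the hood, the training signal follows the motion-compensation idea of [2]: events are warped along the predicted per-pixel flow to a common reference time, and a sharper (deblurred) image of warped events indicates a better flow estimate. Below is a minimal illustrative sketch of such a loss, using a contrast (variance) objective instead of the paper's average-timestamp loss; the names and the nearest-neighbour splatting are our simplifications, not the repo's exact API:

import torch

def motion_compensation_loss(events, flow, t_ref):
    # events: (N, 4) tensor of [x, y, t, p]; flow: (2, H, W) flow in px/s
    x, y, t, _ = events.unbind(dim=1)
    dt = t_ref - t
    # warp each event to the reference time along the flow at its pixel
    x_w = x + flow[0, y.long(), x.long()] * dt
    y_w = y + flow[1, y.long(), x.long()] * dt
    # splat warped events into a count image (nearest neighbour here;
    # a differentiable implementation would use bilinear splatting)
    H, W = flow.shape[1:]
    idx = (y_w.round().clamp(0, H - 1) * W + x_w.round().clamp(0, W - 1)).long()
    img = torch.zeros(H * W, device=events.device)
    img.index_add_(0, idx, torch.ones_like(x_w))
    # a well-compensated event image is sharp, i.e. has high variance
    return -img.var()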

To evaluate the relative pose error (RPE), run associate.py with the ground-truth and estimated trajectories.

The program outputs the trajectory of the estimated path together with the error rate.

python associate.py gt.txt estimated.txt
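
For reference, the RPE compares the relative motion of the estimated trajectory against the ground truth over a fixed step [1]. A simplified numpy sketch of the translational RPE (the actual script also handles timestamp association between the two files):

import numpy as np

def translational_rpe(gt, est, delta=1):
    # gt, est: lists of associated 4x4 homogeneous camera poses
    errs = []
    for i in range(len(gt) - delta):
        gt_rel = np.linalg.inv(gt[i]) @ gt[i + delta]
        est_rel = np.linalg.inv(est[i]) @ est[i + delta]
        err = np.linalg.inv(gt_rel) @ est_rel
        errs.append(np.linalg.norm(err[:3, 3]))  # translational drift
    return np.median(errs), np.mean(errs)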

Result

Odometry Estimation

Evaluation is done with our evaluation scripts. We evaluate the algorithm using the relative pose error (RPE) [1], which corresponds to the drift of the trajectory. We also report the percentage of outliers with RPE >0.5 and >1.0, where errors are capped at a maximum of 2.0.
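
Concretely, the outlier columns can be read as follows (rpe_per_frame is a hypothetical array of per-frame errors):

import numpy as np
rpe = np.minimum(rpe_per_frame, 2.0)         # errors are capped at 2.0
pct_outlier_05 = 100.0 * np.mean(rpe > 0.5)  # % Outlier (>0.5)
pct_outlier_10 = 100.0 * np.mean(rpe > 1.0)  # % Outlier (>1.0)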

| Dataset | Sequence | RPE (median) | RPE (mean) | % Outlier (>0.5) | % Outlier (>1.0) |
| --- | --- | --- | --- | --- | --- |
| MVSEC | Indoor_flying1 | 0.3856 | 0.5213 | 38.63 | 15.00 |
| MVSEC | Indoor_flying2 | 0.3820 | 0.5333 | 39.79 | 15.56 |
| MVSEC | Indoor_flying3 | 0.4045 | 0.5684 | 39.36 | 20.47 |
| MVSEC | Indoor_flying4 | 0.5217 | 0.5919 | 52.41 | 17.74 |
| MVSEC | Outdoor_day1* | 0.1039 | 0.1363 | 1.44 | 1.44 |
| MVSEC | Outdoor_day2** | 0.1301 | 0.3527 | 21.78 | 16.83 |
| MVSEC | Outdoor_night | 0.1270 | 0.2725 | 15.88 | 10.90 |
| IJRR | Poster_translation | 0.2678 | 0.6211 | 41.12 | 34.67 |
* tested on a scene without direct sunlight

** training set

Estimated Optical Flow

Figure: original event images, deblurred images, and estimated optical flow (left to right).

Reference

[1] Sturm, J., Engelhard, N., Endres, F., Burgard, W., & Cremers, D. (2012). A benchmark for the evaluation of RGB-D SLAM systems. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 573-580.

[2] Zhu, A.Z., Yuan, L., Chaney, K., & Daniilidis, K. (2019). Unsupervised Event-Based Learning of Optical Flow, Depth, and Egomotion. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 989-997.

[3] Zhu, A.Z., Thakur, D., Özaslan, T., Pfrommer, B., Kumar, V., & Daniilidis, K. (2018). The Multivehicle Stereo Event Camera Dataset: An Event Camera Dataset for 3D Perception. IEEE Robotics and Automation Letters, 3, 2032-2039.

[4] Mueggler, E., Rebecq, H., Gallego, G., Delbrück, T., & Scaramuzza, D. (2017). The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM. The International Journal of Robotics Research, 36, 142-149.

[5] Stoffregen, T., Scheerlinck, C., Scaramuzza, D., Drummond, T., Barnes, N., Kleeman, L., & Mahony, R. (2020). Reducing the Sim-to-Real Gap for Event Cameras. ECCV.