TANGO-ESRGAN is developed to defend vision-based object tracking systems against white-box attacks. The framework contains the online image restoration CNN TANGO-ESRGAN, a real-time object detection and localization CNN (YOLO 5.6.1), and an object-tracking-to-move controller. Our method, a computationally efficient denoiser built on Real-ESRGAN, shows several properties desirable for real-time deployment on autonomous systems such as self-driving cars and aerial drones: faster runtime, lower computational load, a higher peak signal-to-noise ratio (PSNR) for the reconstructed image, improved image resolution, and the ability to handle a wide range of perturbation levels with a single fixed network model. The Robust Vision-Based Object Tracking System is the TANGO-ESRGAN-based framework.
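As a high-level illustration, each camera frame flows through the three components in sequence: restoration, detection, then tracking control. The sketch below is a minimal mock of that loop; the function names and box format are hypothetical placeholders for illustration, not this repository's actual API.

```python
# Minimal mock of the per-frame pipeline: restore -> detect -> track.
# All names and the box format here are illustrative placeholders,
# not this repository's actual API.
import numpy as np

def restore_frame(frame):
    """Stand-in for the TANGO-ESRGAN denoiser/restorer."""
    return frame  # a real restoration CNN would run here

def detect_objects(frame):
    """Stand-in for the YOLO detector: boxes as (cx, cy, w, h, confidence)."""
    return [(320.0, 240.0, 80.0, 60.0, 0.9)]  # dummy detection

def track(box):
    """Stand-in for the tracking controller (see the sketch further below)."""
    pass

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in camera frame
clean = restore_frame(frame)                     # 1. defend against perturbations
boxes = sorted(detect_objects(clean), key=lambda b: b[-1], reverse=True)
if boxes:
    track(boxes[0])                              # 2-3. track the most confident box
```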
News:
- Aug 15, 2023: Code for fast autonomous exploration is now available! Check this repo for more details.
- Oct 20, 2022: A Multi-level Adaptive Safety Control Framework to assist the end-to-end automatic landing process for fixed-wing UAVs. Check this repo for more details.
Authors: Haotian Gu and Hamid Jafarnejad Sani from the SIT Safe Autonomous System Lab.
The left column shows the attacked dynamic and static object tracking system; the right column shows the test result of the TANGO-ESRGAN-embedded robust object tracking system. Please click the video introduction here. Demonstrations of this work were presented at ICRA 2024: page1, page2, page3. To run this project in minutes, check Quick Start. Check the other sections for more detailed information. Please kindly star ⭐ this project if it helps you. We put great effort into developing and maintaining it 😁😁.
- Quick Start
- Algorithms and Papers
- Setup and Config
- Run Simulations
- Use in Your Application
- Updates
- Known issues
Before starting, we recommend following the Wiki to configure the simulation environment. Activate the ROS environment:
conda activate ros_env
The project has been tested on Ubuntu 18.04 (ROS Melodic). Using Ubuntu 18.04 as an example, run the following commands to set up:
cd ${YOUR_WORKSPACE_PATH}/src
git clone https://github.com/Robotics/TANGO-ESRGAN.git
Install the dependencies of TANGO-ESRGAN and YOLO 5.6.1:
cd fastdvdnet
pip install -r requirements.txt
Compile the workspace:
cd ${YOUR_WORKSPACE_PATH}
catkin_make
You may check the detailed instructions to set up the project. After compilation, you can run a static object tracking demo:
source devel/setup.bash && roslaunch tcps_image_attack autoflight.launch # static object tracking demo
Run a dynamic object tracking demo:
source devel/setup.bash && roslaunch tcps_image_attack autodriving.launch # dynamic object tracking demo
Run the online adaptive white-box attack for the static object tracking case and its corresponding defense algorithm:
roslaunch tcps_image_attack train.launch # attack the object localization
roslaunch tcps_image_attack train_denoiser_tangoesrgan.launch # object tracking to move while under attack
Run the online adaptive white-box attack for the dynamic object tracking case and its corresponding defense algorithm:
roslaunch tcps_image_attack train_w_car.launch # attack the object localization
roslaunch tcps_image_attack train_denoiser_car.launch # object tracking to move while under attack
Please follow the tutorial in the Wiki to configure the simulation environment.
The project contains a collection of robust and computationally efficient algorithms for object tracking to move:
- Kinodynamic path searching
- B-spline-based trajectory optimization
- Topological path searching and path-guided optimization
- Perception-aware planning strategy (to appear)

These methods are detailed in our papers listed below. Please cite at least one of our papers if you use this project in your research: Bibtex.
- Multi-level Adaptation for Automatic Landing with Engine Failure under Turbulent Weather, Haotian Gu and Hamid Jafarnejad Sani, AIAA, 2022.
- TANGO-ESRGAN, Haotian Gu and Hamid Jafarnejad Sani, IEEE International Conference on Robotics and Automation (ICRA), 2024.
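A BibTeX entry for the ICRA 2024 paper can be assembled from the listing above; the citation key and field formatting here are our own placeholders, so prefer the official Bibtex link when available:

```
@inproceedings{gu2024tangoesrgan,
  title     = {TANGO-ESRGAN},
  author    = {Gu, Haotian and Jafarnejad Sani, Hamid},
  booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
  year      = {2024}
}
```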
All planning algorithms, along with other key modules such as the object detection CNN, the real-time white-box defender, and the tracking controller, are implemented in Robust_Vision_Based_Object_Tracking_Framework:
- adaptive_white_box_attacker: A GAN-based reinforcement learning agent acts as an online image generator, trained to misguide the vehicle according to the adversary's objective.
- object_detector: Conducts real-time object detection and localization in the camera coordinate system (YOLO 5.6.1). The detection outcome is post-processed into a list of bounding-box coordinates (center, width, height) of the detected objects, sorted by detection confidence.
- adaptive_white_box_defender: TANGO-ESRGAN defends against white-box attacks in a vision-based object tracking system. TANGO-ESRGAN is built on Real-ESRGAN, a state-of-the-art general video and image restoration algorithm, and reconstructs a high-resolution image so the detector can label and localize the detected object with high confidence.
- object_tracking_controller: Given the target estimated by the object detector, the autonomous guidance system uses the tracking controller to keep the bounding box of the target at the center of the camera view and the size of the bounding box within a range (a control-law sketch follows this list).
- bspline: An implementation of the B-spline-based trajectory representation (an evaluation sketch follows this list).
- bspline_opt: Gradient-based trajectory optimization over B-spline trajectories.
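To make the object_tracking_controller bullet concrete, here is a minimal proportional-control sketch. The gains, image size, and command interface are illustrative assumptions, not this repository's implementation.

```python
# Proportional tracking-control sketch. The gains, image size, and command
# interface are illustrative assumptions, not this repository's implementation.

def tracking_command(box_cx, box_cy, box_w,
                     img_w=640.0, img_h=480.0, desired_w=120.0,
                     k_yaw=0.003, k_climb=0.003, k_fwd=0.004):
    """Map a bounding box (pixel center and width) to velocity commands."""
    yaw_rate = k_yaw * (img_w / 2.0 - box_cx)    # re-center the box horizontally
    climb    = k_climb * (img_h / 2.0 - box_cy)  # re-center the box vertically
    forward  = k_fwd * (desired_w - box_w)       # keep the box size in range
    return yaw_rate, climb, forward

# Target right of center and smaller than desired: the command turns the
# camera toward it and moves the vehicle closer.
print(tracking_command(box_cx=420.0, box_cy=240.0, box_w=80.0))
```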
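Similarly, the core idea behind the bspline module can be illustrated with a textbook cubic uniform B-spline evaluator; this is a generic sketch of the standard basis-matrix formulation, not the repository's implementation.

```python
import numpy as np

# Textbook cubic uniform B-spline evaluation via the standard basis matrix;
# a generic sketch, not the repository's bspline implementation.
M = (1.0 / 6.0) * np.array([[ 1.0,  4.0,  1.0, 0.0],
                            [-3.0,  0.0,  3.0, 0.0],
                            [ 3.0, -6.0,  3.0, 0.0],
                            [-1.0,  3.0, -3.0, 1.0]])

def eval_cubic_bspline(ctrl_pts, i, u):
    """Point on segment i (uses control points i..i+3) at local u in [0, 1)."""
    weights = np.array([1.0, u, u * u, u ** 3]) @ M   # blending weights
    return weights @ ctrl_pts[i:i + 4]

ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.0], [3.0, 0.0], [4.0, 1.0]])
print(eval_cubic_bspline(ctrl, 0, 0.5))  # point on the first segment
```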
- Our software is developed and tested on Ubuntu 18.04 (ROS Melodic). Follow the documents to install Melodic according to your Ubuntu version.
- The proposed TANGO-ESRGAN and Robust Vision-Based Object Tracking Framework depend on Anaconda and PyTorch. For the simulator, we use AirSim and Unreal Engine 4.27.1. To configure the machine-learning environment, please follow the Wiki to install Nvidia Driver 495.29.05, CUDA 11.3, and cuDNN 8.2.1.
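After installing the driver, CUDA, and cuDNN, you can verify that PyTorch sees the GPU stack with a few standard calls:

```python
import torch

print(torch.__version__)                  # PyTorch build, e.g. one compiled for CUDA 11.3
print(torch.cuda.is_available())          # True once the NVIDIA driver is working
print(torch.version.cuda)                 # CUDA version PyTorch was built against
print(torch.backends.cudnn.version())     # e.g. 8201 for cuDNN 8.2.x
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # the GPU the framework will run on
```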
After the prerequisites are satisfied, you can clone this repository into your catkin workspace and run catkin_make. A new workspace is recommended:
cd ${YOUR_WORKSPACE_PATH}/src
git clone https://github.com/Robotics/TANGO-ESRGAN.git
cd ../
catkin_make
If you encounter problems in this step, please first refer to existing issues, pull requests, and Google before raising a new issue.
If you have successfully run the simulation and want to use TANGO-ESRGAN and the robust vision-based object tracking framework in your project, please explore the code files.
- Oct 20, 2022:
- April 12, 2023:
- July 5, 2023:
- Jan 30, 2024:
The source code is released under license.
This is research code; it is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of merchantability or fitness for a particular purpose.