
Perception and Learning for Robotics

Deep-RL-based safe landing using an RGB camera on rough terrains. Exam project for the ETH course "Perception and Learning for Robotics".

This exam project was developed over one semester. It uses deep reinforcement learning to train a drone to land in a realistic and challenging environment (a glacier) using only a mounted RGB camera.

The project report is available in Report.pdf.

The project extends the work done by Nasib Naimi in his semester thesis at the Autonomous Systems Lab, ETH Zurich (available in real-lsd/Nasib report.pdf).

Framework

For the deep reinforcement learning framework, the OpenAI algorithm Proximal Policy Optimization 2 (PPO2) is employed, while Unreal Engine 4.16 is used for the simulation. Both a Multi-Layer Perceptron (MLP) policy and a Convolutional Neural Network (CNN) policy were tested, fed with the RGB images captured by the camera mounted under the drone and looking straight down at the ground (pitch of -90°).
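As a rough sketch of what such a training loop looks like with the stable-baselines implementation of PPO2 (a CartPole environment stands in here for the actual UnrealCV glacier simulation, and the save name ppo2_landing_policy is illustrative):

```python
import gym
from stable_baselines import PPO2
from stable_baselines.common.policies import MlpPolicy
from stable_baselines.common.vec_env import DummyVecEnv

# Stand-in environment; the project trains on the UnrealCV glacier
# simulation instead (see the sketch in the next section).
env = DummyVecEnv([lambda: gym.make("CartPole-v1")])

# PPO2 with an MLP policy; for the raw RGB frames from the
# downward-facing camera, CnnPolicy would be used instead.
model = PPO2(MlpPolicy, env, verbose=1)
model.learn(total_timesteps=100000)
model.save("ppo2_landing_policy")
```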

UnrealCV and Gym-UnrealCV are used to connect the RL framework with the simulated environment.
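A minimal sketch of how an environment is created through Gym-UnrealCV, assuming a packaged UE4 binary is available; the environment id UnrealLanding-GlacierRGB-v0 is hypothetical and stands in for whichever id the project registers:

```python
import gym
import gym_unrealcv  # importing the package registers the UnrealCV-backed envs with gym

# Hypothetical id; gym-unrealcv registers its environments under
# task/map-specific names at import time.
env = gym.make("UnrealLanding-GlacierRGB-v0")

obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
env.close()
```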

(Figure: framework overview)

Setup

The repository is composed of two folders: