DroneSoftwares

This project includes various modules for autonomous quadrotor perception and navigation, created by the Digital Springs team (dspii.com).

The task at hand is tackled through four separate modules:

Tree localization.

The tree localization module uses OpenCV to process raw RGB images of the farms with two objectives: 1. classifying every pixel with tree/non-tree labels and 2. localizing each tree to obtain a map of the tree distribution. The GPS location of each tree's center point serves as the key to this map and, together with the crown size, constitutes the desired output of the tree localization module.
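A minimal sketch of what such a pixel mask and per-tree centroid extraction could look like with OpenCV. The HSV vegetation threshold, the minimum blob area and the `px_to_gps` pixel-to-GPS callback are illustrative assumptions, not the module's actual classifier or geo-referencing:

```python
import cv2
import numpy as np

def localize_trees(image_bgr, px_to_gps):
    """Label tree pixels and return (lat, lon, crown_diameter_px) per tree.

    `px_to_gps` is an assumed callback mapping (row, col) pixel coordinates
    to (lat, lon); the real module presumably geo-references its imagery.
    """
    # Crude vegetation mask in HSV space (green hue band) -- an assumption,
    # not the project's actual tree/non-tree classifier.
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Each remaining connected component is treated as one tree crown.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    trees = []
    for i in range(1, n):                      # label 0 is background
        area = stats[i, cv2.CC_STAT_AREA]
        if area < 200:                         # ignore small blobs (noise)
            continue
        cx, cy = centroids[i]
        crown_diameter_px = 2.0 * np.sqrt(area / np.pi)
        lat, lon = px_to_gps(cy, cx)
        trees.append((lat, lon, crown_diameter_px))
    return mask, trees
```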

Mental rehearsal.

Each target farm is first simulated by the mental rehearsal module to produce a navigation plan that meets the sensory requirements. For instance, the radar measurement of a tree is not complete unless the drone visits that tree along two perpendicular paths. Once the target farm is specified, a rough simulation of the environment is created. This can be initialized either from the information the tree localization module extracts from Google image data or from a random tree distribution.

The path planning algorithm at the heart of this module first breaks the entire search domain into a 3D grid. The Z direction of the grid can be tuned to the desired minimum allowed altitude, while the X and Y directions depend on the size of the target farm. An initial complete-coverage (sweeping) path is then generated to ensure that no spot remains unvisited once the initial sweep is finished. From this path, the algorithm generates the step commands for the AirSim API. AirSim is the medium that gives us access to the flight dynamics, first in simulation and later, via MAVLink messages, at the deployment stage. The flight commands discovered during the sweep can be recorded for transfer to deployment.

In addition, the simulated drone is equipped with RGB and depth cameras, which are essential for collision-free navigation. The RGB image stream is the equivalent of the GoPro's image stream at deployment time, while the depth camera plays the role that a lidar would. The RGB stream is pipelined to the tree localization module to recreate the GPS log of tree locations. This information is then used by the, now trained, path planning algorithm to generate a finalized path that meets the sensor requirements, such as the radar's need for perpendicular passes above each tree. Estimates of the desired altitude, power consumption, flight duration and drone speed are also obtained during the mental rehearsal stage to minimize the flight cost at deployment time.
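A minimal sketch of the kind of grid-based sweep and AirSim step commands described above. The grid spacing, altitude, speed and lawn-mower ordering are illustrative assumptions; the actual planner is more involved:

```python
import airsim
import numpy as np

def sweep_waypoints(x_size, y_size, cell, altitude):
    """Lawn-mower (boustrophedon) coverage path over an x_size x y_size farm.

    `cell` is the grid spacing in metres and `altitude` the flight height;
    both are tuning parameters, illustrative values only.
    """
    xs = np.arange(0.0, x_size + cell, cell)
    ys = np.arange(0.0, y_size + cell, cell)
    waypoints = []
    for i, x in enumerate(xs):
        row = ys if i % 2 == 0 else ys[::-1]   # alternate sweep direction
        for y in row:
            waypoints.append((float(x), float(y), -abs(altitude)))  # NED: z is negative up
    return waypoints

# Connect to the AirSim multirotor and fly the sweep step by step.
client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)
client.takeoffAsync().join()

for x, y, z in sweep_waypoints(x_size=100.0, y_size=60.0, cell=10.0, altitude=15.0):
    client.moveToPositionAsync(x, y, z, velocity=4.0).join()

    # Grab the RGB and depth frames that feed tree localization and
    # collision-free navigation.
    responses = client.simGetImages([
        airsim.ImageRequest("0", airsim.ImageType.Scene, False, False),
        airsim.ImageRequest("0", airsim.ImageType.DepthPerspective, True),
    ])

client.landAsync().join()
client.armDisarm(False)
client.enableApiControl(False)
```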

Sensor fusion.

This module operates on the onboard processor. It makes calls to the GPS device and uses the GPS location and each call's timestamp as labels to create a database of radar and camera measurements. During the mental rehearsal stage, these measurements are tested by the path planner to ensure that the sensor requirements are satisfied.
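A minimal sketch of such a GPS- and time-stamped measurement log, here using SQLite. The table layout and the `read_gps()`/`read_radar()`/`read_camera()` helpers are assumptions, not the project's actual onboard interfaces:

```python
import sqlite3
import time

# Hypothetical measurement database keyed by GPS fix and call timestamp.
db = sqlite3.connect("measurements.db")
db.execute("""
    CREATE TABLE IF NOT EXISTS measurements (
        timestamp   REAL,
        lat         REAL,
        lon         REAL,
        alt         REAL,
        radar_blob  BLOB,
        image_blob  BLOB
    )
""")

def log_measurement(read_gps, read_radar, read_camera):
    """Store one radar/camera sample labelled by GPS location and timestamp."""
    ts = time.time()
    lat, lon, alt = read_gps()          # assumed GPS accessor
    radar = read_radar()                # assumed raw radar bytes
    image = read_camera()               # assumed encoded camera frame
    db.execute(
        "INSERT INTO measurements VALUES (?, ?, ?, ?, ?, ?)",
        (ts, lat, lon, alt, radar, image),
    )
    db.commit()
```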

Deployment.

The deployment module is an exact replica of the mental rehearsal stage, except that the flight dynamics quantities (yaw, pitch, roll) are no longer passed down to Unreal Engine for simulation after being translated into MAVLink messages. The finalized (or initial complete-coverage) paths generated by the path planning algorithm are translated into MAVLink messages via AirSim and passed down to the Pixhawk for the drone's in-field operation. Both the initial and the finalized paths can leverage the tree localization module to post-process what is obtained by the sensor fusion module. In addition, the tree localization data created by the drone's movement in the field can be stored as a database for future measurements in the same field.
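A minimal sketch of how deployment could replay the plan recorded during mental rehearsal through the same AirSim client, assuming the vehicle in AirSim's settings.json is configured as a PX4 multirotor so that the API calls reach the Pixhawk as MAVLink messages. The waypoint file name and format are illustrative assumptions:

```python
import json
import airsim

# Replay the plan recorded during mental rehearsal. With the vehicle backed by
# PX4, the same API calls are forwarded to the Pixhawk over MAVLink instead of
# being simulated in Unreal Engine.
with open("rehearsal_plan.json") as f:      # assumed recording format
    waypoints = json.load(f)                # list of [x, y, z] in NED metres

client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)
client.takeoffAsync().join()

for x, y, z in waypoints:
    client.moveToPositionAsync(x, y, z, velocity=4.0).join()
    # Sensor fusion logging (radar + camera keyed by GPS/time) runs alongside
    # on the onboard processor; see the sensor fusion sketch above.

client.landAsync().join()
client.armDisarm(False)
client.enableApiControl(False)
```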