Yolov3-Object-Detection-from-Drone-feed

This project implements real-time object detection from a drone camera feed using the YOLOv3 (You Only Look Once, version 3) deep learning architecture. It was built with tools and frameworks such as ROS, Gazebo, ArduPilot, and MAVROS.

Real-time object detection on a drone camera feed, simulated in Gazebo

I've always found Image Processing and Computer Vision applications fascinating, and at the same time I love watching drone flight videos on YouTube. As my interest peaked, I had to follow through and implement this project, enhancing my theoretical and working knowledge along the way. For a comprehensive overview of the project, I suggest you check out the following article.

Let's start with the exciting part: the outcome of the project looks something like this,

Simulation Result

Note: unlike my other projects, this one was implemented on a Linux system (Ubuntu 22.04).

Required components for the project:

  1. ROS (Robot Operating System) - hosts the components
  2. MAVROS - serves as the communication bridge
  3. ArduPilot - controls the drone
  4. Gazebo - simulation environment
  5. darknet_ros package - runs the YOLOv3 model for object detection
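For reference, the sketch below shows one plausible way to pull these pieces into a catkin workspace; the workspace path, the ROS distro placeholder, and the repository choices are my assumptions, not a prescribed setup.

```bash
# Minimal sketch, assuming a catkin workspace at ~/catkin_ws.
# MAVROS from the package repos (replace <distro>, e.g. noetic):
sudo apt install ros-<distro>-mavros ros-<distro>-mavros-extras

# ArduPilot (SITL flight controller):
git clone --recursive https://github.com/ArduPilot/ardupilot.git

# darknet_ros (YOLO for ROS):
cd ~/catkin_ws/src
git clone --recursive https://github.com/leggedrobotics/darknet_ros.git
cd ~/catkin_ws && catkin_make -DCMAKE_BUILD_TYPE=Release
```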

These components are structured in the following manner:

Communication Pipeline

Finally, the underlying communication between the nodes is described in the following diagrams:

rqt graph1

rqt graph2
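These graphs were produced with rqt_graph; with the simulation running, you can regenerate the node/topic view yourself:

```bash
# Visualize the live ROS computation graph (run while the stack is up)
rosrun rqt_graph rqt_graph
```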

The main components and foundations used in this project were derived from the Intelligent Quads repos; my work was figuring out how to integrate each component and put them together to get the desired outcome.

Documentation

The documentation for this project can be found here.

Run Locally

You can explore the Intelligent Quads repos and customize your own implementation.

Alternatively, for the exact implementation used in this project, you can download the components mentioned earlier from the files provided in this repo (any missing files can be derived from the sources) and follow the steps below.

Create the CMakeLists.txt and package.xml files for your ROS package, or derive and modify them from the sources (a boilerplate-generation sketch follows).
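If you would rather generate the boilerplate than write it by hand, catkin_create_pkg produces both files; the package name drone_yolo_sim and the dependency list here are hypothetical, so substitute your own.

```bash
# Generate package.xml and CMakeLists.txt with some common dependencies
# (package name "drone_yolo_sim" is hypothetical; pick your own)
cd ~/catkin_ws/src
catkin_create_pkg drone_yolo_sim rospy mavros gazebo_ros
cd ~/catkin_ws && catkin_make
```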

  1. Run the runway.launch launch file; this should load the world in Gazebo.
roslaunch "relative directory" runway.launch
  2. Import the drone_with_camera model into the Gazebo world simulation.

You should be able to see the drone present on the runway (a command-line way to spawn the model is sketched below).
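If you prefer the terminal over the Gazebo GUI's Insert tab, gazebo_ros can spawn the model directly; the model path below is an assumption based on a typical Gazebo model layout, so point it at wherever your copy of the SDF actually lives.

```bash
# Spawn the drone model into the running Gazebo world
# (the model path is an assumption; adjust it to your setup)
rosrun gazebo_ros spawn_model -sdf -model drone_with_camera \
  -file ~/.gazebo/models/drone_with_camera/model.sdf
```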

  3. Initialize ArduPilot, set the mode to GUIDED, and wait a few seconds for it to load completely.
roslaunch "relative directory" apm.launch

You can now send flight commands to the drone through ArduPilot, watch it move around, and view the camera feed, as sketched below.
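As a concrete example, the standard MAVProxy console commands below (entered at the ArduPilot SITL prompt) arm the drone and command a takeoff; the 10 m altitude is just an illustrative value, and the camera topic name depends on your camera plugin configuration, so check `rostopic list`.

```bash
# In the MAVProxy console started by ArduPilot SITL:
#   mode GUIDED      # switch to GUIDED mode
#   arm throttle     # arm the motors
#   takeoff 10       # climb to 10 m (illustrative altitude)

# In a separate terminal, open a viewer for the drone camera feed
rosrun rqt_image_view rqt_image_view
```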

  4. You can now run the darknet_ros package.
roslaunch darknet_ros darknet_ros.launch

The simulation is now complete; you should see objects being detected over the camera feed.
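Beyond the annotated overlay, darknet_ros also publishes the detections as messages; echoing its bounding-box topic is a quick sanity check. The topic names below follow the darknet_ros defaults and may differ if you changed its config.

```bash
# Detected classes, confidence scores, and box coordinates
rostopic echo /darknet_ros/bounding_boxes

# Camera feed with the detection boxes drawn on top
rosrun rqt_image_view rqt_image_view /darknet_ros/detection_image
```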

The documentation also covers multi-drone swarm coordination, but that part is not completely implemented.

Socials plug

B.E. Pranav Kumaar, Student ID @ Amrita Vishwa Vidyapeetham - CB.EN.U4AIE20052

🔥 Twitter

LinkedIn

❄️ GitHub