This is a high school project in which I explored drone building and experimented with adding an element of autonomy to a hand-built quadcopter.
The goal of the project was to build a simple drone, from scratch, with manual controls; then take it a step further by giving it the ability to detect objects and return information about them, such as what they are and where they are. The drone built is a 250-class quadcopter; the tech specs are available in the documentation PDF below. The object detection and data processing features run on a Raspberry Pi 3 Model B+.
Regarding object detection:
The CNN model used is a Caffe (a deep learning framework) port of the original TensorFlow implementation of the MobileNet SSD architecture, trained by chuanqi305. After training on the COCO dataset, the model lets the drone detect up to 20 object classes with a mean average precision of 72.7%.
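As a rough sketch of how such a model is used on the Pi: OpenCV's `dnn` module can load the Caffe files and run inference, and the SSD output blob then has to be decoded into labels and boxes. The post-processing below is my own illustrative helper (the class list follows chuanqi305's released model; the file names in the comments are the ones he publishes, shown here only as an example).

```python
import numpy as np

# Class labels matching chuanqi305's MobileNet-SSD (20 classes + background).
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
           "bus", "car", "cat", "chair", "cow", "diningtable", "dog",
           "horse", "motorbike", "person", "pottedplant", "sheep",
           "sofa", "train", "tvmonitor"]

def parse_detections(output, frame_w, frame_h, conf_thresh=0.5):
    """Decode the raw (1, 1, N, 7) SSD output blob into a list of
    (label, confidence, (x1, y1, x2, y2)) tuples above a confidence threshold."""
    results = []
    for det in output[0, 0]:
        # Each row is [image_id, class_id, confidence, x1, y1, x2, y2],
        # with box coordinates normalized to [0, 1].
        confidence = float(det[2])
        if confidence < conf_thresh:
            continue
        label = CLASSES[int(det[1])]
        box = (det[3:7] * np.array([frame_w, frame_h, frame_w, frame_h])).astype(int)
        results.append((label, confidence, tuple(box)))
    return results

# On the Pi the network would be loaded and run roughly like this
# (a sketch, not the exact project code):
#   net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
#                                  "MobileNetSSD_deploy.caffemodel")
#   blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
#                                0.007843, (300, 300), 127.5)
#   net.setInput(blob)
#   detections = parse_detections(net.forward(), frame.shape[1], frame.shape[0])
```

The decoding step is the same regardless of backend, so it can be tested without a camera by feeding it a synthetic output array.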
Here are some videos produced over the course of this project:
In brief, my goal was a 'seeing' autonomous drone. The way I structured my solution was to let a Python script control the drone (using DroneKit) instead of me, and have it perform a task for me: detect the objects surrounding it, report them back, and then act on them, for example landing if it sees a person ahead. The code for that Python script is here. Feel free to criticize or offer suggestions; I will keep adding features and upgrading this drone over time.
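The control loop described above can be sketched as follows. The DroneKit calls (`connect`, `VehicleMode`, `simple_takeoff`) are the library's real API, but the connection string, target altitude, and the `detect_objects()` helper are placeholders standing in for this project's actual script.

```python
def should_land(detected_labels):
    """Pure decision rule from the write-up: land as soon as a person is seen."""
    return "person" in detected_labels

def fly(connection_string="/dev/ttyAMA0", target_alt=3.0):
    # Imported inside the function so the decision rule above stays
    # usable on machines without DroneKit installed.
    import time
    from dronekit import connect, VehicleMode

    vehicle = connect(connection_string, wait_ready=True, baud=57600)
    vehicle.mode = VehicleMode("GUIDED")
    vehicle.armed = True
    while not vehicle.armed:       # wait for the flight controller to arm
        time.sleep(1)
    vehicle.simple_takeoff(target_alt)

    while True:
        # detect_objects() is a placeholder for the MobileNet SSD pass
        # shown earlier, returning a list of class labels per frame.
        labels = detect_objects()
        print("I can see:", labels)
        if should_land(labels):
            vehicle.mode = VehicleMode("LAND")
            break
        time.sleep(1)
    vehicle.close()
```

Keeping the decision rule separate from the flight code makes it easy to swap in other behaviors later (follow an object, hold position, and so on) without touching the DroneKit plumbing.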
Documentation:
<iframe src="documentation.pdf" width="100%" height="500px"> </iframe>