YOLOv1_from_scratch

Implementation of the paper "You Only Look Once: Unified, Real-Time Object Detection" (YOLOv1)

Primary language: Jupyter Notebook

Introduction

This project builds YOLOv1 from scratch on the TensorFlow/Keras framework. The fruits dataset contains three classes: apple, banana, and orange. The model detects these fruits in an image and draws bounding boxes around them. The training set contains 240 images grouped into four categories (apple, banana, orange, mixed), and the test set contains 60 images.

YOLOv1 paper: https://arxiv.org/abs/1506.02640

🆕 YOLOv2 is released 🆕

A YOLOv2 built from scratch is now available in its own repository. That project extends this work with anchor boxes.

Methods and techniques used in this project

YOLOv1 architecture:

[Figure: YOLOv1 network structure]
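The detection head predicts an S × S × (B·5 + C) tensor: S × S grid cells, B boxes per cell (x, y, w, h, confidence), plus C class scores. A quick sketch of that arithmetic for this dataset, assuming the paper's S = 7 and B = 2 (C = 3 here for apple/banana/orange):

```python
# YOLOv1 output grid: S x S cells, each predicting B boxes of
# (x, y, w, h, confidence) plus C conditional class probabilities.
S, B, C = 7, 2, 3          # S and B from the paper; C = 3 fruit classes
output_shape = (S, S, B * 5 + C)
print(output_shape)        # the final dense layer reshapes to this
```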

YOLOv1 loss:

[Figure: YOLOv1 loss function]
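The paper's loss is a weighted sum-squared error over localization, confidence, and classification terms, with λ_coord = 5 and λ_noobj = 0.5. A simplified NumPy sketch, assuming one box per cell (B = 1) with the responsible box already matched — the full implementation must also do the paper's IoU-based box assignment:

```python
import numpy as np

LAMBDA_COORD = 5.0   # weight on localization terms (from the paper)
LAMBDA_NOOBJ = 0.5   # down-weights confidence loss in empty cells

def yolo_v1_loss(pred, target):
    """Simplified YOLOv1 loss sketch.

    pred, target: arrays of shape (S, S, 5 + C) laid out as
    [x, y, w, h, confidence, class probabilities...].
    """
    obj = target[..., 4]      # 1 where a cell contains an object
    noobj = 1.0 - obj

    # Localization: squared error on x, y and on sqrt(w), sqrt(h)
    xy_loss = np.sum(obj[..., None] * (pred[..., 0:2] - target[..., 0:2]) ** 2)
    wh_loss = np.sum(obj[..., None]
                     * (np.sqrt(pred[..., 2:4]) - np.sqrt(target[..., 2:4])) ** 2)

    # Confidence: penalized in both object and no-object cells
    conf_err = (pred[..., 4] - target[..., 4]) ** 2
    conf_loss = np.sum(obj * conf_err) + LAMBDA_NOOBJ * np.sum(noobj * conf_err)

    # Classification: squared error on class scores, object cells only
    cls_loss = np.sum(obj[..., None] * (pred[..., 5:] - target[..., 5:]) ** 2)

    return LAMBDA_COORD * (xy_loss + wh_loss) + conf_loss + cls_loss
```

A perfect prediction gives zero loss; any confidence error in an object cell contributes its squared difference directly.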

The model and loss function were built according to the paper. The model contains one Dropout layer with rate 0.5. Training images were augmented with random brightness (max_delta = 1) and random saturation (lower = 0.5, upper = 1.5).
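As a rough illustration of that augmentation, here is a NumPy sketch. The notebook itself presumably uses tf.image.random_brightness and tf.image.random_saturation with the parameters above; the saturation change here is approximated by blending each pixel with its grayscale value:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_brightness(img, max_delta=1.0):
    """Add a uniform brightness delta in [-max_delta, max_delta]."""
    delta = rng.uniform(-max_delta, max_delta)
    return np.clip(img + delta, 0.0, 1.0)

def random_saturation(img, lower=0.5, upper=1.5):
    """Scale saturation by a random factor, blending toward grayscale."""
    factor = rng.uniform(lower, upper)
    gray = img.mean(axis=-1, keepdims=True)   # per-pixel grayscale
    return np.clip(gray + factor * (img - gray), 0.0, 1.0)

# Apply both augmentations to a 448x448 RGB image (YOLOv1's input size)
img = rng.uniform(size=(448, 448, 3)).astype(np.float32)
aug = random_saturation(random_brightness(img, max_delta=1.0))
```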

The training set contains 240 images and 240 annotation (.xml) files; the test set contains 60 images and 60 annotation (.xml) files.
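Reading those .xml files could look like the following sketch, assuming the common Pascal VOC layout (an object element with a name and a bndbox holding xmin/ymin/xmax/ymax); the dataset's actual files may differ slightly:

```python
import xml.etree.ElementTree as ET

def parse_voc_annotation(xml_text):
    """Parse one Pascal VOC-style annotation into (class, box) pairs."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")                    # e.g. "apple"
        bb = obj.find("bndbox")
        box = tuple(int(float(bb.findtext(t)))
                    for t in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((name, box))
    return boxes
```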

The model was trained for approximately 10,000 epochs, which took over 4 days in total.

Prediction example

Detect apple


Detect banana


Detect orange


Detect mixed
