This repository contains the dataset and scripts needed for training a YOLOv5 model to detect UAVs (Unmanned Aerial Vehicles) from images captured at multiple angles. The dataset is specifically curated to provide diverse perspectives of UAVs, making it suitable for robust object detection models.
The dataset can be downloaded from the following link:
The dataset is organized as follows:
```
dataset/
├── images/
│   ├── train/
│   ├── val/
│   └── test/
└── labels/
    ├── train/
    ├── val/
    └── test/
```
- images/: Contains the image files for training, validation, and testing.
- labels/: Contains the corresponding label files in YOLO format for each image.
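Each label file contains one line per object. As a sketch of the YOLO label format (the numeric values below are made up for illustration, not taken from the dataset):

```python
# YOLO label format: each line of a .txt label file is
#   <class_id> <x_center> <y_center> <width> <height>
# with all coordinates normalized to [0, 1] relative to the image size.
line = "0 0.512 0.430 0.120 0.085"  # illustrative values only

parts = line.split()
class_id = int(parts[0])              # 0 == 'enemy', the only class
x_c, y_c, w, h = map(float, parts[1:])

# Convert the box center to pixels for the dataset's 1920x1080 images.
img_w, img_h = 1920, 1080
x_px, y_px = x_c * img_w, y_c * img_h
print(class_id, x_px, y_px)
```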
- Total Images: 1623
- Image Resolution: 1920x1080 pixels
- Classes: 1 (enemy)
The following are required:
- Python 3.7+
- YOLOv5 Repository
- PyTorch 1.7+
- Other dependencies as listed in the YOLOv5 repository.
- Clone the YOLOv5 repository:

  ```shell
  git clone https://github.com/ultralytics/yolov5.git
  cd yolov5
  ```

- Install the required dependencies:

  ```shell
  pip install -r requirements.txt
  ```

- Copy the dataset to the `yolov5/dataset/` directory.
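After copying, it is worth verifying that every split is in place. A small sanity-check sketch, assuming the directory layout shown above and `.jpg` images (adjust `img_ext` if your files use another extension):

```python
from pathlib import Path

def count_split(root, split, img_ext="*.jpg"):
    """Count image and label files for one split of the dataset layout above."""
    root = Path(root)
    n_imgs = len(list((root / "images" / split).glob(img_ext)))
    n_lbls = len(list((root / "labels" / split).glob("*.txt")))
    return n_imgs, n_lbls

if __name__ == "__main__":
    for split in ("train", "val", "test"):
        imgs, lbls = count_split("dataset", split)
        print(f"{split}: {imgs} images, {lbls} labels")
```

Image and label counts should match in every split; a mismatch usually means a missing or misnamed label file.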
To train the YOLOv5 model on the UAV dataset:
- Update the `data.yaml` file in the `yolov5/data/` directory to include the path to your dataset and the number of classes. Example `data.yaml`:

  ```yaml
  train: ../dataset/images/train
  val: ../dataset/images/val
  test: ../dataset/images/test
  nc: 1  # number of classes
  names: ['enemy']  # class names
  ```
- Start training using the following command:

  ```shell
  python train.py --img 640 --batch 16 --epochs 50 --data ./data/data.yaml --weights yolov5s.pt --cache
  ```

  - `--img`: image size for training.
  - `--batch`: batch size.
  - `--epochs`: number of epochs.
  - `--data`: path to the `data.yaml` file.
  - `--weights`: pre-trained weights to start training from.
  - `--cache`: cache images in memory for faster training.
- Monitor the training process using the provided metrics and adjust parameters as needed.
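YOLOv5 logs per-epoch metrics to a `results.csv` inside the run directory (the path below assumes the default `runs/train/exp`; later runs get `exp2`, `exp3`, …). A small sketch for pulling the final epoch's metrics out of that file:

```python
import csv

def last_metrics(csv_path):
    """Return the final epoch's row of a YOLOv5-style results.csv as a dict.

    Column names are stripped because YOLOv5 pads them with spaces.
    """
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    return {k.strip(): v.strip() for k, v in rows[-1].items()}

# Example usage after a run (default experiment directory assumed):
# print(last_metrics("runs/train/exp/results.csv"))
```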
After training, evaluate the model on the test split (this requires a `test:` entry in `data.yaml` pointing to `../dataset/images/test`):

```shell
python val.py --task test --data ./data/data.yaml --weights runs/train/exp/weights/best.pt --img 640
```
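`val.py` reports precision, recall, and mAP, all of which rest on the intersection-over-union (IoU) overlap between predicted and ground-truth boxes. A minimal IoU sketch for a single pair of boxes (illustrative only, not YOLOv5's vectorized implementation):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Overlap rectangle; width/height clamp to 0 when the boxes are disjoint.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A prediction typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold (0.5 for mAP@0.5).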
This project is licensed under the MIT License - see the LICENSE file for details.
- Ultralytics YOLOv5 for providing a robust object detection framework.
- Contributors who helped create and curate the dataset.