YOLOv5 + SAHI (Slicing Aided Hyper Inference)

Overview

  • Object detection and instance segmentation are among the most important applications of computer vision. However, detecting small objects and running inference on large images remain major issues in practical use. SAHI (Slicing Aided Hyper Inference) helps developers overcome these real-world problems with its slicing-based inference utilities, and this repository applies it to YOLOv5 models.

Standard Inference with a YOLOv5 Model

Sliced Inference with a YOLOv5 Model (YOLOv5 + SAHI)

Installation

  • Create and activate a virtual environment, clone the repository, and install the dependencies (assuming Anaconda is installed):
conda create -n yolov5sahi python -y
conda activate yolov5sahi
git clone https://github.com/zahidesatmutlu/yolov5-sahi  # clone
cd yolov5-sahi
pip install -r requirements.txt  # install
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
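  • Optionally, you can verify that the CUDA-enabled PyTorch build is active (a quick sanity check, not part of the original setup):
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"  # should print 1.9.0+cu111 True on a CUDA-capable machine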
  • Copy the test folder (containing the images you want to run detection on) and your best.pt weight file into the project folder:
./yolov5-sahi/%here%
  • The file structure should be like this:
yolov5-sahi/
    .idea
    sahi
    test
    venv
    yolov5
    sahi_predict.py

Usage

from sahi.predict import get_prediction, get_sliced_prediction, predict

yolov5_model_path = 'yolov5s.pt'  # replace with your own weight file (e.g. best.pt) copied into the project folder

model_type = "yolov5"
model_path = yolov5_model_path
model_device = "0"  # cuda device, e.g. 0 or 0,1,2,3, or "cpu"
model_confidence_threshold = 0.8  # discard detections below this confidence score

slice_height = 512  # height of each slice in pixels
slice_width = 512  # width of each slice in pixels
overlap_height_ratio = 0.2  # vertical overlap between adjacent slices (fraction of slice height)
overlap_width_ratio = 0.2  # horizontal overlap between adjacent slices (fraction of slice width)

source_image_dir = "test/"  # directory containing the images to run detection on

predict(
    model_type=model_type,
    model_path=model_path,
    model_device=model_device,
    model_confidence_threshold=model_confidence_threshold,
    source=source_image_dir,
    slice_height=slice_height,
    slice_width=slice_width,
    overlap_height_ratio=overlap_height_ratio,
    overlap_width_ratio=overlap_width_ratio,
)
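
The predict call above runs batched sliced inference over every image in the source directory. For a single image, the get_prediction and get_sliced_prediction functions imported above can be called directly. The sketch below is a minimal example, assuming a SAHI release that provides AutoDetectionModel and a hypothetical image at test/sample.jpg; adjust paths and device to your setup.

from sahi import AutoDetectionModel
from sahi.predict import get_prediction, get_sliced_prediction

# Load the YOLOv5 weights through SAHI's model wrapper
detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov5",
    model_path="yolov5s.pt",   # or your own best.pt
    confidence_threshold=0.8,
    device="cuda:0",           # or "cpu"
)

# Standard inference: the whole image is fed to the model at once
result = get_prediction("test/sample.jpg", detection_model)

# Sliced inference: the image is split into 512x512 tiles with 20% overlap,
# each tile is predicted separately, and the detections are merged back
sliced_result = get_sliced_prediction(
    "test/sample.jpg",
    detection_model,
    slice_height=512,
    slice_width=512,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)

# Save annotated images for visual comparison of the two modes
result.export_visuals(export_dir="runs/standard/")
sliced_result.export_visuals(export_dir="runs/sliced/")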

Citation

If you use this package in your work, please cite it as:

@article{akyon2022sahi,
  title={Slicing Aided Hyper Inference and Fine-tuning for Small Object Detection},
  author={Akyon, Fatih Cagatay and Altinuc, Sinan Onur and Temizel, Alptekin},
  journal={2022 IEEE International Conference on Image Processing (ICIP)},
  doi={10.1109/ICIP46576.2022.9897990},
  pages={966-970},
  year={2022}
}
@software{obss2021sahi,
  author       = {Akyon, Fatih Cagatay and Cengiz, Cemil and Altinuc, Sinan Onur and Cavusoglu, Devrim and Sahin, Kadir and Eryuksel, Ogulcan},
  title        = {{SAHI: A lightweight vision library for performing large scale object detection and instance segmentation}},
  month        = nov,
  year         = 2021,
  publisher    = {Zenodo},
  doi          = {10.5281/zenodo.5718950},
  url          = {https://doi.org/10.5281/zenodo.5718950}
}

Resources