
YOLOSHOW - YOLOv5 / YOLOv7 / YOLOv8 / YOLOv9 / RTDETR GUI based on PySide6

Introduction

YOLOSHOW is a graphical user interface (GUI) application embedded with the YOLOv5 / YOLOv7 / YOLOv8 / YOLOv9 / RT-DETR algorithms.

English   |   简体中文

YOLOSHOW-Screen

Demo Video

YOLOSHOW v1.x : YOLOSHOW-YOLOv9/YOLOv8/YOLOv7/YOLOv5/RTDETR GUI

YOLOSHOW v2.x : YOLOSHOWv2.0-YOLOv9/YOLOv8/YOLOv7/YOLOv5/RTDETR GUI

Todo List

  • Add YOLOv9 Algorithm

  • Adjust User Interface (Menu Bar)

  • Complete RTSP Function

  • Support Instance Segmentation (YOLOv5 & YOLOv8)

  • Add RT-DETR Algorithm (Ultralytics repo)

  • Add Model Comparison Mode (VS Mode)

  • Support Pose Estimation (YOLOv5 & YOLOv8)

Functions

1. Support Image / Video / Webcam / Folder (Batch) Object Detection

Choose Image / Video / Webcam / Folder (Batch) in the menu bar on the left to detect objects.

2. Change Models / Hyperparameters Dynamically

While the program is running detection, you can change models and hyperparameters on the fly:

  1. Supports switching the model among YOLOv5 / YOLOv7 / YOLOv8 / YOLOv9 / RTDETR / YOLOv5-seg / YOLOv8-seg dynamically
  2. Supports changing IOU / Confidence / Delay time / Line thickness dynamically (see the sketch after this list)
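
A hedged sketch of how these hyperparameters typically map onto an Ultralytics-style call (the weight file ptfiles/yolov8n.pt and the image test.jpg are illustrative placeholders, and this is not YOLOSHOW's internal code):

from ultralytics import YOLO

model = YOLO("ptfiles/yolov8n.pt")                         # swap the weight file to change models
results = model.predict("test.jpg", conf=0.25, iou=0.45)   # Confidence / IOU thresholds
annotated = results[0].plot(line_width=3)                  # line thickness of the drawn boxes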

3. Loading Models Automatically

The program automatically detects pt files (YOLOv5 / YOLOv7 / YOLOv8 / YOLOv9 models) that were previously added to the ptfiles folder.

If you need to add a new pt file, click the Import Model button in the Settings box to select it. The program will then copy it into the ptfiles folder.

Notice:

  1. Every pt file name must include yolov5 / yolov7 / yolov8 / yolov9 / rtdetr (e.g. yolov8-test.pt).
  2. For segmentation models, the name must include yolov5n-seg / yolov8s-seg (e.g. yolov8n-seg-test.pt). A sketch of this naming-based scan is shown below.
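
A hedged sketch of how pt files in the ptfiles folder could be scanned and grouped by the naming convention above (the folder name comes from this README; the logic is illustrative, not YOLOSHOW's actual implementation):

import glob
import os

def scan_ptfiles(folder="ptfiles"):
    # Group weight files by (algorithm, task) based on their file names.
    models = {}
    for pt in sorted(glob.glob(os.path.join(folder, "*.pt"))):
        name = os.path.basename(pt).lower()
        for tag in ("yolov5", "yolov7", "yolov8", "yolov9", "rtdetr"):
            if tag in name:
                task = "segment" if "-seg" in name else "detect"
                models.setdefault((tag, task), []).append(pt)
                break
    return models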

4. Loading Configuration

  1. On startup, the program automatically loads the last saved configuration parameters.
  2. On shutdown, the program saves any changed configuration parameters (a sketch of this behavior follows the list).
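
A minimal sketch of this load-on-startup / save-on-shutdown behavior, assuming a JSON settings file (the file name config.json and the keys are illustrative, not YOLOSHOW's actual format):

import json
import os

DEFAULTS = {"model": "yolov8n.pt", "iou": 0.45, "conf": 0.25, "delay": 10, "line_thickness": 3}

def load_config(path="config.json"):
    # Load the last saved parameters, falling back to defaults for missing keys.
    if os.path.exists(path):
        with open(path, encoding="utf-8") as f:
            return {**DEFAULTS, **json.load(f)}
    return dict(DEFAULTS)

def save_config(cfg, path="config.json"):
    # Persist any changed parameters so they can be restored on the next startup.
    with open(path, "w", encoding="utf-8") as f:
        json.dump(cfg, f, indent=2)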

5. Save Results

If you need to save results, click Save MP4/JPG before starting detection. The detection results will then be saved to the selected path.

6. Support Object Detection and Instance Segmentation

Since YOLOSHOW v1.2, the program supports both Object Detection and Instance Segmentation. It also supports task switching between different versions, such as switching from a YOLOv5 Object Detection task to a YOLOv8 Instance Segmentation task.

7. Support Model Comparison in Both Object Detection and Instance Segmentation

Since YOLOSHOW v2.0, the program supports comparing model performance in both Object Detection and Instance Segmentation.

Preparation

Experimental environment

OS : Windows 11 
CPU : Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz
GPU : NVIDIA GeForce GTX 1660Ti 6GB

1. Create virtual environment

Create a virtual environment with Python 3.9, then activate it.

conda create -n yoloshow python=3.9
conda activate yoloshow

2. Install the PyTorch Framework

Windows: pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
Linux: pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

To install a different PyTorch version, see Pytorch.

3. Install Dependency Packages

Switch to the directory where the program is located:

cd {the location of the program}

Install the program's dependency packages (a quick environment check follows the commands below):

pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install "PySide6-Fluent-Widgets[full]" -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install -U PySide6 -i https://pypi.tuna.tsinghua.edu.cn/simple
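
A quick sanity check that the environment installed correctly (standard PyTorch / PySide6 calls, not YOLOSHOW-specific code):

import torch
import PySide6

print("PySide6:", PySide6.__version__)
print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())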

4. Add Fonts

Copy all *.ttf font files from the fonts folder into C:\Windows\Fonts.
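
The same copy can also be scripted; a hedged convenience sketch, assuming it is run from the repository root in an elevated (administrator) prompt:

import glob
import os
import shutil

# Copy every bundled *.ttf font into the Windows font directory.
for ttf in glob.glob(os.path.join("fonts", "*.ttf")):
    shutil.copy(ttf, r"C:\Windows\Fonts")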

5. Run Program

python main.py

Frameworks

Python / PyTorch

Reference

YOLO Algorithm

YOLOv5 YOLOv7 YOLOv8 YOLOv9

YOLO Graphical User Interface

YOLOSIDE PyQt-Fluent-Widgets