yolov5-q

This repo adds instance segmentation to yolov5-6.0, based on yolact.

Primary language: Python · License: GNU General Public License v3.0 (GPL-3.0)

Please use the official yolov5-seg instead; it is faster and more accurate!

📖README

  • This repo plans to add instance segmentation based on yolov5 and yolact.
  • The weights in the releases support detection only; they are only compatible with this repo.
  • This repo is experimental and semi-finished for now.
  • I wrote some scripts to export yolov5 through tensorrtx, but I no longer use them, so they may not work.
  • mAPmask seems too low compared with mAPbbox because this is a naive version; I haven't done many experiments yet.
  • The weights (s, m, l) will be released when I finish training yolov5l.

✍TODO

  • plot_results
  • process_masks: mask CUDA out of memory
  • detect_seg.py
  • support flip augmentation
  • val
  • clean dataset.py
  • DetectSegment head support gw
  • smaller gt_masks to save memory (supports train.py only)
  • test scale_coords influence on mAP
  • nosave
  • train_cfg.py
  • support albumentations
  • Mixup
  • DetectSegment head support gd
  • better way to compute seg loss
  • coco datasets
  • coco eval
  • clean pruning code
  • more powerful mask head
  • areas
  • better visualization
  • it looks like plot_masks blurs images
  • plot_images bug
  • tensorrt export

🖼️Results

🪵Models

| Model       | size<br>(pixels) | mAP<sup>val</sup><br>bbox | mAP<sup>val</sup><br>mask | Speed<br>RTX2070 b1<br>(ms) | params<br>(M) | FLOPs<br>@640 (B) |
| ----------- | ---------------- | ------------------------- | ------------------------- | --------------------------- | ------------- | ----------------- |
| yolov5s-seg | 640              | 38.0                      | 28.1                      | 8.8                         | 7.4           | 25.9              |
| yolov5m-seg | 640              | 45.2                      | 33.1                      | 11.2                        | 22.0          | 71.1              |

🎨Quick Start

Installation

Clone the repo and install requirements.txt in a Python>=3.7.0 environment, including PyTorch>=1.7.1.

```shell
git clone https://github.com/Laughing-q/yolov5-q.git
cd yolov5-q
pip install -r requirements.txt
pip install -e .
```
Training

Prepare your detection labels in yolov5 format to train object detection:

  • training detection

```shell
python tools/train.py --data ./data/seg/balloon.yaml --weights weights/yolov5s.pt --epochs 50 --batch-size 8
```

Prepare your mask labels like below to train instance segmentation; each x y pair is a polygon point of the mask:

```
0 x1 y1 x2 y2 x3 y3 ...
1 x1 y1 x2 y2 x3 y3 x4 y4 x5 y5 ...
2 x1 y1 x2 y2 x3 y3 x4 y4 ...
...
```

You can also check the coco-segment labels from official yolov5, or my test dataset balloon.
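A minimal sketch of writing one such label file, assuming polygon coordinates should be normalized to [0, 1] by image size as in standard yolov5 labels (the function name is mine, not part of this repo):

```python
def write_seg_label(path, instances, img_w, img_h):
    """Write a yolov5-style segmentation label file.

    instances: list of (class_id, [(x, y), ...]) with points in pixel
    coordinates; each line becomes `cls x1 y1 x2 y2 ...` normalized
    to [0, 1] by the image width and height.
    """
    lines = []
    for cls, polygon in instances:
        coords = " ".join(
            f"{x / img_w:.6f} {y / img_h:.6f}" for x, y in polygon
        )
        lines.append(f"{cls} {coords}")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
```

One label file per image, matching the image filename, as in yolov5's usual layout.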

  • training segmentation

```shell
python tools/train.py --data ./data/seg/balloon.yaml --weights weights/yolov5s.pt --cfg ./configs/segment/yolov5s_seg.yaml --epochs 50 --batch-size 8 --mask --mask-ratio 8
```
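The --mask-ratio option suggests ground-truth masks are stored at reduced resolution to save memory (see the "smaller gt_masks" TODO). A rough sketch of that idea with plain NumPy striding — not the repo's actual implementation:

```python
import numpy as np

def downsample_masks(masks, ratio):
    """Nearest-neighbour downsample of gt masks (N, H, W) by `ratio`.

    Striding keeps every `ratio`-th pixel, shrinking memory use by
    roughly ratio**2 at the cost of coarser mask boundaries.
    """
    return masks[:, ::ratio, ::ratio]

masks = np.ones((2, 640, 640), dtype=np.uint8)
print(downsample_masks(masks, 8).shape)  # (2, 80, 80)
```

With ratio 8, a 640x640 mask shrinks to 80x80, which is why training fits in memory more easily.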
Evaluation

  • eval detection

```shell
python tools/val.py --data ./data/seg/balloon.yaml --weights weights/yolov5s.pt --batch-size 8
```

  • eval segmentation

```shell
python tools/val.py --data ./data/seg/balloon.yaml --weights weights/yolov5s.pt --batch-size 8 --mask
```
Detection and Instance Segmentation

  • detection

```shell
python tools/detect.py --source img/dir/video/stream --weights weights/yolov5s.pt
```

  • instance segmentation

```shell
python tools/detect.py --source img/dir/video/stream --weights weights/yolov5s.pt --mask
```

🖌Tips

  • Plotting masks occupies a lot of CUDA memory, so plots=False by default during training; you may need to run tools/val.py after training for more visualization.
  • process_mask saves a lot of CUDA memory but produces rough masks (plots=False).
  • process_mask_unsample occupies a lot of CUDA memory but produces better masks (plots=False).
  • wandb and evolve are not supported, because I don't need them.
  • For tools/train.py, just add the --mask and --cfg options to train instance segmentation.
  • For tools/val.py, just add the --mask option to evaluate instance segmentation.
  • For tools/detect.py, just add the --mask option to run instance segmentation.
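For quick visualization outside the repo's plotting code, overlaying a binary mask on an image can be sketched with plain NumPy (overlay_mask is a hypothetical helper, not the repo's plot_masks):

```python
import numpy as np

def overlay_mask(img, mask, color=(0, 255, 0), alpha=0.5):
    """Alpha-blend a binary mask onto an HxWx3 uint8 image.

    Pixels where `mask` is nonzero are blended toward `color`;
    the rest of the image is left unchanged.
    """
    out = img.astype(np.float32)
    color = np.array(color, dtype=np.float32)
    m = mask.astype(bool)
    out[m] = (1 - alpha) * out[m] + alpha * color
    return out.astype(np.uint8)
```

This runs on the CPU after inference, so it avoids the CUDA-memory cost of plotting during training.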

🍔Reference