yolov7-face

YOLOv7 face detection with landmarks


yolov7-face-trt


This code is designed to run yolov7-face in a TensorRT (Python) environment.

to-do list

  • support webcam and video (works, but image & video are still slow)
  • support EfficientNMS_TRT
  • simplify and optimize the code
  • update the C++ code (using TensorRT EfficientNMS)

Environment Setting

Prepare Docker & Docker Compose

This code was tested with docker-compose 1.29.2 (important) and Docker 20.10.3 on Ubuntu 20.04 (docker/gpu and docker/cpu), and with JetPack 5.0.1 (docker/jetson).

Prepare Python libraries

This code was tested with the Docker images nvcr.io/nvidia/pytorch:22.10-py3 (RTX 3090) and nvcr.io/nvidia/l4t-pytorch:r35.1.0-pth1.13-py3 (Jetson Orin).

GPU setting

cd ./docker/runtime/gpu
sh compose.sh # set up and enter the docker container
cd yolov7-face

Jetson setting

cd ./docker/runtime/jetson
sh compose.sh # set up and enter the docker container
cd yolov7-face

demo in PyTorch (+ 6DoF)

demo yolov7-face image

# to check latency, append the --print-log option
# without 6DoF result
python3 detect.py --weights yolov7-tiny-face.pt --source img_path

# with 6DoF result
python3 detect.py --weights yolov7-tiny-face.pt --source img_path --use-dof # (--save-dof to save the result)
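If you want to inspect or redraw the results yourself, the visualization boils down to a few OpenCV calls. A minimal sketch, assuming each detection is a box (x1, y1, x2, y2, conf) plus five (x, y) landmarks; this array layout is an assumption for illustration, not the exact output format of detect.py:

import cv2
import numpy as np

def draw_faces(img, dets, lmks):
    # dets: (N, 5) array of x1, y1, x2, y2, conf      (assumed layout)
    # lmks: (N, 5, 2) array of landmark x, y points   (assumed layout)
    for (x1, y1, x2, y2, conf), pts in zip(dets, lmks):
        cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
        cv2.putText(img, f"{conf:.2f}", (int(x1), int(y1) - 4),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        for x, y in pts:  # eyes, nose tip, mouth corners
            cv2.circle(img, (int(x), int(y)), 2, (0, 0, 255), -1)
    return img

img = cv2.imread("face_img.jpg")
dets = np.array([[100, 80, 220, 240, 0.93]], dtype=np.float32)  # dummy detection
lmks = np.array([[[130, 140], [190, 140], [160, 175],
                  [135, 205], [185, 205]]], dtype=np.float32)   # dummy landmarks
cv2.imwrite("face_out.jpg", draw_faces(img, dets, lmks))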


demo yolov7-face webcam (or video)

# to check latency, append the --print-log option
# without 6DoF result
python3 detect.py --weights yolov7-tiny-face.pt --source 0 # 0 is the webcam index; pass a video path instead for a video file

# with 6DoF result
python3 detect.py --weights yolov7-tiny-face.pt --source 0 --use-dof # (--save-dof to save the result)
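Under the hood, --source 0 corresponds to an OpenCV capture device. A minimal sketch of the kind of capture loop the webcam path uses (illustrative only, not the repo's actual code):

import cv2

cap = cv2.VideoCapture(0)  # 0 = webcam index; pass a file path to read a video instead
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # run face detection on `frame` here
    cv2.imshow("yolov7-face", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()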

demo yolov7-face realsense

# to check latency, append the --print-log option
# without 6DoF result
python3 detect.py --weights yolov7-tiny-face.pt --source 'rgb' --use-rs # ('infrared' support is future work)

# with 6DoF result
python3 detect.py --weights yolov7-tiny-face.pt --source 'rgb' --use-rs --use-dof # (--save-dof to save the result)
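The --use-rs path reads frames from an Intel RealSense camera. A minimal sketch of grabbing the RGB stream with pyrealsense2; this shows the typical API usage, not necessarily how detect.py wires it up:

import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)  # RGB stream
pipeline.start(config)
try:
    while True:
        frames = pipeline.wait_for_frames()
        color = frames.get_color_frame()
        if not color:
            continue
        frame = np.asanyarray(color.get_data())  # HxWx3 BGR image
        # run face detection on `frame` here
finally:
    pipeline.stop()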

Run the TensorRT model for yolov7-face (Python environment)

First, convert the PyTorch model to an ONNX model (without include_nms)

# convert yolov7-tiny-face.pt to yolov7-tiny-face.onnx
python3 models/export.py --weights yolov7-tiny-face.pt --grid --simplify
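Before building an engine, it can be worth sanity-checking the exported ONNX file with onnxruntime. A minimal sketch, assuming the usual 1x3x640x640 float32 input of this model family (check the printed input shape if unsure):

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("yolov7-tiny-face.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
print(inp.name, inp.shape)  # expect something like 1x3x640x640
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)  # assumed input shape
for out in sess.run(None, {inp.name: dummy}):
    print(out.shape)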

Second, convert the ONNX model to a TensorRT engine (run this on the machine you will deploy to, since TensorRT engines are specific to the GPU they are built on)

# convert yolov7-tiny-face.onnx to yolov7-tiny-face.trt (without NMS, using FP16)
python3 models/export_tensorrt.py -o yolov7-tiny-face.onnx -e yolov7-tiny-face.trt
# convert yolov7-tiny-face.onnx to end2end.trt (with NMS)
python3 models/export_tensorrt.py -o yolov7-tiny-face.onnx -e end2end.trt --end2end
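export_tensorrt.py wraps the standard TensorRT builder flow. A minimal sketch of that flow with the TensorRT 8.x Python API, assuming a static-shape network and FP16 (illustrative, not the script's actual code):

import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("yolov7-tiny-face.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # matches the FP16 default mentioned above
serialized = builder.build_serialized_network(network, config)  # TensorRT >= 8.0

with open("yolov7-tiny-face.trt", "wb") as f:
    f.write(serialized)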

Third, run the TensorRT engine

Run image inference

# using pytorch nms
python3 trt_inference/yolo_face_trt_inference.py -e yolov7-tiny-face.trt -i {image_path} -o {output_img_name}
# add the --print-log option to display the running FPS

# using end2end image
python3 trt_inference/yolo_face_trt_inference.py -e {trt file} -i image.jpg --end2end

# using end2end video
python3 trt_inference/yolo_face_trt_inference.py -e {trt file} -v face_img.avi --end2end

Run webcam inference

using torchvision NMS

python3 trt_inference/yolo_face_trt_inference.py -e yolov7-tiny-face.trt -v 0
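In this mode the engine outputs raw candidate boxes and the host filters them; the NMS step itself is a single torchvision call. A minimal sketch with dummy tensors (names are placeholders):

import torch
import torchvision

xy = torch.rand(100, 2) * 600        # dummy top-left corners
wh = torch.rand(100, 2) * 40 + 1     # dummy box sizes
boxes = torch.cat([xy, xy + wh], 1)  # (N, 4) x1, y1, x2, y2
scores = torch.rand(100)             # (N,) confidence scores
keep = torchvision.ops.nms(boxes, scores, iou_threshold=0.45)
boxes, scores = boxes[keep], scores[keep]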

using TensorRT NMS

python3 trt_inference/yolo_face_trt_inference.py -e end2end.trt --end2end -v 0
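With --end2end, NMS runs inside the engine via the EfficientNMS_TRT plugin, so the host only copies out already-filtered detections. A minimal sketch of deserializing and running such an engine with the TensorRT 8.x binding API and pycuda, assuming a 1x3x640x640 input and the common num_dets / det_boxes / det_scores / det_classes output names (an assumption; check your engine's actual binding names):

import numpy as np
import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(logger, "")  # registers EfficientNMS_TRT
with open("end2end.trt", "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# one page-locked host buffer and one device buffer per binding
host_bufs, dev_bufs, bindings = {}, {}, []
stream = cuda.Stream()
for i in range(engine.num_bindings):
    name = engine.get_binding_name(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    size = trt.volume(engine.get_binding_shape(i))
    host_bufs[name] = cuda.pagelocked_empty(size, dtype)
    dev_bufs[name] = cuda.mem_alloc(host_bufs[name].nbytes)
    bindings.append(int(dev_bufs[name]))

img = np.random.rand(1, 3, 640, 640).astype(np.float32)  # stand-in for a preprocessed frame
inp = engine.get_binding_name(0)  # assumes binding 0 is the image input
np.copyto(host_bufs[inp], img.ravel())
cuda.memcpy_htod_async(dev_bufs[inp], host_bufs[inp], stream)
context.execute_async_v2(bindings, stream.handle)
for i in range(engine.num_bindings):
    if not engine.binding_is_input(i):
        name = engine.get_binding_name(i)
        cuda.memcpy_dtoh_async(host_bufs[name], dev_bufs[name], stream)
stream.synchronize()

n = int(host_bufs["num_dets"][0])  # assumed output names
print(n, "faces")
print(host_bufs["det_boxes"][:n * 4].reshape(n, 4))
print(host_bufs["det_scores"][:n])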

Run Realsense inference

Future work; not yet supported.

New features

  • Dynamic keypoints
  • WingLoss (a sketch of this loss follows the list)
  • Efficient backbones
  • EIOU and SIOU
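Of these, WingLoss is the easiest to show in isolation: for landmark regression it behaves like a scaled logarithm for small errors and like L1 for large ones, so training pays more attention to small and medium errors. A minimal sketch of the loss from the Wing Loss paper (Feng et al., 2018) with its common default parameters; a generic implementation, not necessarily the one this repo uses:

import math
import torch

def wing_loss(pred, target, w=10.0, eps=2.0):
    # wing(x) = w * ln(1 + |x| / eps)  if |x| < w
    #         = |x| - C                otherwise, with C = w - w * ln(1 + w / eps)
    x = (pred - target).abs()
    C = w - w * math.log(1.0 + w / eps)
    return torch.where(x < w, w * torch.log(1.0 + x / eps), x - C).mean()

# dummy batch: 8 faces, 5 (x, y) landmark points each
pred = torch.rand(8, 5, 2) * 640
target = pred + torch.randn_like(pred) * 3.0
print(wing_loss(pred, target))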
Accuracy on the WiderFace validation set (Easy / Medium / Hard):

Method | Test Size | Easy | Medium | Hard | FLOPs (B) @640 | Link
--- | --- | --- | --- | --- | --- | ---
yolov7-lite-t | 640 | 88.7 | 85.2 | 71.5 | 0.8 | google
yolov7-lite-s | 640 | 92.7 | 89.9 | 78.5 | 3.0 | google
yolov7-tiny | 640 | 94.7 | 92.6 | 82.1 | 13.2 | google
yolov7s | 640 | 94.8 | 93.1 | 85.2 | 16.8 | google
yolov7 | 640 | 96.9 | 95.5 | 88.0 | 103.4 | google
yolov7+TTA | 640 | 97.2 | 95.8 | 87.7 | 103.4 | google
yolov7-w6 | 960 | 96.4 | 95.0 | 88.3 | 89.0 | google
yolov7-w6+TTA | 1280 | 96.9 | 95.8 | 90.4 | 89.0 | google

Dataset

  • WiderFace
  • yolov7-face-label

Test

Demo
