This code is designed to run yolov7-face in a TensorRT Python environment.
- Supports webcam and video input (image and video inference are slow)
- Supports EfficientNMS_TRT
- Simplified and optimized code
- C++ code updated (using TensorRT EfficientNMS)
This code was tested with docker-compose 1.29.2 (important) and Docker 20.10.3 on Ubuntu 20.04 (docker/gpu and docker/cpu), and with JetPack 5.0.1 (docker/jetson).
This code was tested with the Docker images nvcr.io/nvidia/pytorch:22.10-py3 (RTX 3090) and nvcr.io/nvidia/l4t-pytorch:r35.1.0-pth1.13-py3 (Jetson Orin).
cd ./docker/runtime/gpu
sh compose.sh # set up and enter the docker container
cd yolov7-face
cd ./docker/runtime/jetson
sh compose.sh # set up and enter the docker container
cd yolov7-face
# to check latency, append the --print-log option
# without 6dof result
python3 detect.py --weights yolov7-tiny-face.pt --source img_path
# with 6dof result
python3 detect.py --weights yolov7-tiny-face.pt --source img_path --use-dof # (--save-dof to save the result)
# to check latency, append the --print-log option
# without 6dof result
python3 detect.py --weights yolov7-tiny-face.pt --source 0 or video_path # (0 is webcam index)
# with 6dof result
python3 detect.py --weights yolov7-tiny-face.pt --source 0 or video_path --use-dof # (--save-dof to save the result)
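For reference, --source accepts either a webcam index or a video file path. The snippet below is only a minimal OpenCV sketch of how such a source can be opened and read frame by frame, not the repo's detect.py logic:

```python
import cv2

def open_source(source: str) -> cv2.VideoCapture:
    """Open a webcam index (e.g. '0') or a video file path with OpenCV."""
    cap = cv2.VideoCapture(int(source) if source.isdigit() else source)
    if not cap.isOpened():
        raise RuntimeError(f"Cannot open source: {source}")
    return cap

cap = open_source("0")          # webcam index 0
while True:
    ok, frame = cap.read()      # BGR frame as a numpy array
    if not ok:
        break
    # ... run face detection on `frame` here ...
cap.release()
```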
# to check latency, append the --print-log option
# without 6dof result
python3 detect.py --weights yolov7-tiny-face.pt --source 'rgb' --use-rs # 'infrared' support is planned as future work
# with 6dof result
python3 detect.py --weights yolov7-tiny-face.pt --source 'rgb' --use-rs --use-dof # (--save-dof to save the result)
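The --use-rs option reads frames from an Intel RealSense camera. Below is only an illustrative pyrealsense2 capture sketch (the 640x480 @ 30 FPS color stream is an assumed setting), not the repo's implementation:

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# 640x480 BGR color stream at 30 FPS (assumed settings)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    while True:
        frames = pipeline.wait_for_frames()
        color = frames.get_color_frame()
        if not color:
            continue
        frame = np.asanyarray(color.get_data())  # HxWx3 uint8 BGR image
        # ... run face detection on `frame` here ...
finally:
    pipeline.stop()
```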
# convert yolov7-tiny-face.pt to yolov7-tiny-face.onnx
python3 models/export.py --weights yolov7-tiny-face.pt --grid --simplify
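Before building a TensorRT engine, it can help to sanity-check the exported graph. This is not part of the repo's scripts, just a minimal sketch using the onnx package (the file name matches the command above):

```python
import onnx

model = onnx.load("yolov7-tiny-face.onnx")
onnx.checker.check_model(model)  # raises if the graph is malformed

# Print input/output names and shapes to confirm the exported head
for tensor in list(model.graph.input) + list(model.graph.output):
    dims = [d.dim_value or d.dim_param for d in tensor.type.tensor_type.shape.dim]
    print(tensor.name, dims)
```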
# convert yolov7-tiny-face.onnx to yolov7-tiny-face.trt (without NMS, using fp16)
python3 models/export_tensorrt.py -o yolov7-tiny-face.onnx -e yolov7-tiny-face.trt
# convert yolov7-tiny-face.onnx to end2end.trt (with NMS)
python3 models/export_tensorrt.py -o yolov7-tiny-face.onnx -e end2end.trt --end2end
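For reference, the core of an ONNX-to-TensorRT conversion with FP16 enabled looks roughly like the sketch below (standard TensorRT 8.x Python API; file names taken from the commands above). This is an illustrative sketch, not the contents of models/export_tensorrt.py; the --end2end path additionally attaches EfficientNMS_TRT to the network, which is omitted here.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("yolov7-tiny-face.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # fp16, as in the command above
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB workspace

engine_bytes = builder.build_serialized_network(network, config)
with open("yolov7-tiny-face.trt", "wb") as f:
    f.write(engine_bytes)
```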
Run image inference
# use PyTorch NMS
python3 trt_inference/yolo_face_trt_inference.py -e yolov7-tiny-face.trt -i {image_path} -o {output_img_name}
# add the --print-log option to display runtime FPS
# using the end2end engine (image)
python3 trt_inference/yolo_face_trt_inference.py -e {trt file} -i image.jpg --end2end
# using the end2end engine (video)
python3 trt_inference/yolo_face_trt_inference.py -e {trt file} -v face_img.avi --end2end
python3 trt_inference/yolo_face_trt_inference.py -e yolov7-tiny-face.trt -v 0 # webcam with PyTorch NMS (0 is the webcam index)
python3 trt_inference/yolo_face_trt_inference.py -e end2end.trt --end2end -v 0 # webcam with the end2end engine
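Internally, engine inference boils down to deserializing the .trt file and running it on preprocessed frames. The following is a condensed sketch with TensorRT 8.x and PyCUDA (buffer handling simplified; preprocessing, decoding, and NMS omitted), not the exact code in trt_inference/yolo_face_trt_inference.py:

```python
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda

logger = trt.Logger(trt.Logger.INFO)
with open("yolov7-tiny-face.trt", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate host/device buffers for every binding (TRT 8.x binding API, fixed shapes assumed)
bindings, host_bufs, dev_bufs = [], [], []
for i in range(engine.num_bindings):
    shape = engine.get_binding_shape(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = np.empty(trt.volume(shape), dtype=dtype)
    dev = cuda.mem_alloc(host.nbytes)
    bindings.append(int(dev))
    host_bufs.append(host)
    dev_bufs.append(dev)

# `img` is assumed to be a preprocessed 1x3x640x640 float32 tensor (binding 0 = input)
img = np.zeros((1, 3, 640, 640), dtype=np.float32)
np.copyto(host_bufs[0], img.ravel())
cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
context.execute_v2(bindings)                 # synchronous inference
for h, d in zip(host_bufs[1:], dev_bufs[1:]):
    cuda.memcpy_dtoh(h, d)                   # copy outputs back to host
# host_bufs[1:] now hold raw detections (decoding/NMS depends on the engine)
```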
Original README: https://github.com/derronqi/yolov7-face
- Dynamic keypoints
- WingLoss
- Efficient backbones
- EIOU and SIOU
| Method | Test Size | Easy | Medium | Hard | FLOPs (B) @640 | Link |
|---|---|---|---|---|---|---|
| yolov7-lite-t | 640 | 88.7 | 85.2 | 71.5 | 0.8 | |
| yolov7-lite-s | 640 | 92.7 | 89.9 | 78.5 | 3.0 | |
| yolov7-tiny | 640 | 94.7 | 92.6 | 82.1 | 13.2 | |
| yolov7s | 640 | 94.8 | 93.1 | 85.2 | 16.8 | |
| yolov7 | 640 | 96.9 | 95.5 | 88.0 | 103.4 | |
| yolov7+TTA | 640 | 97.2 | 95.8 | 87.7 | 103.4 | |
| yolov7-w6 | 960 | 96.4 | 95.0 | 88.3 | 89.0 | |
| yolov7-w6+TTA | 1280 | 96.9 | 95.8 | 90.4 | 89.0 | |