This is a fork of tjuskyzhang/Scaled-YOLOv4-TensorRT and WongKinYiu/ScaledYOLOv4, which implements the Scaled-YOLOv4 model in TensorRT. I made this fork because I was having trouble getting the model to build properly on the latest version of TensorRT.
- Shoddily hacked the code together to get it to build properly in TensorRT version 8. It seems to work, but I did get some warnings about subnormal values at half (FP16) precision.
- Fixed the `gen_wts.py` script, which is supposed to accept an argument for the location of the `.pt` weights but was actually hard-coded.
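The fix can be sketched roughly as follows: parse the weights path from the command line instead of hard-coding it. The option names and default output path here are assumptions for illustration, not the actual `gen_wts.py` code.

```python
# Hypothetical sketch of the gen_wts.py argument fix: take the .pt
# weights path as a CLI argument instead of a hard-coded location.
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(
        description="Generate a .wts file from PyTorch .pt weights")
    parser.add_argument("weights", help="path to the .pt weights file")
    parser.add_argument("-o", "--output", default="yolov4-p6.wts",
                        help="path for the generated .wts file")
    return parser.parse_args(argv)

# Example invocation: python gen_wts.py yolov4-p6.pt -o yolov4-p6.wts
args = parse_args(["yolov4-p6.pt", "-o", "yolov4-p6.wts"])
```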
- Install dependencies with Poetry.
- Run `dvc pull` to pull the model weights (or download them from the ScaledYOLOv4 repo).
- Run `make start_build` to launch an NVIDIA Docker container linked to this project.
- Within the interactive shell, `cd` to `/yolo` and run `build.sh`.
- Model weights will be in `yolov4-p6-tensorrt/build/yolov4-p6.engine`.
- `libmyplugins.so` is needed to add support for certain custom layers, and must be loaded into Triton Inference Server.
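If you want to use the engine outside Triton, the same plugin library has to be loaded into the process before deserializing the engine. A minimal sketch, assuming the library path (the helper name is mine, not part of this repo):

```python
# Minimal sketch (plugin path is an assumption): load libmyplugins.so
# into the process so TensorRT can resolve the custom layer
# implementations when the engine is deserialized.
import ctypes
import os

def load_trt_plugins(plugin_path):
    """Load a TensorRT plugin library; raises if the file is missing."""
    if not os.path.isfile(plugin_path):
        raise FileNotFoundError(plugin_path)
    # CDLL runs the library's initializers, which register the custom
    # plugin creators with TensorRT's plugin registry.
    return ctypes.CDLL(plugin_path)
```

With Triton itself, the usual approach is to preload the library when launching the server (e.g. via `LD_PRELOAD`), so the TensorRT backend can find the plugin creators.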