To run with TensorRT, the engine must be built locally on the machine that will run inference (TensorRT engines are tied to the device and TensorRT version they were built with).
YOLOv7-face repository: https://github.com/pdh930105/yolov7-face
git clone https://github.com/pdh930105/yolov7-face.git
cd yolov7-face
python3 models/export.py --weights yolov7-tiny-face.pt --grid --simplify
python3 models/export_tensorrt.py -o yolov7-tiny-face_wo_nms.onnx -e end2end_yolo.trt --end2end --max_det 10 --rkpts # using onnx
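Before building the engine, it can help to sanity-check the exported ONNX graph. The snippet below is a minimal sketch assuming onnx and onnxruntime are installed; the filename matches the export command above.

import onnx
import onnxruntime as ort

onnx_path = "yolov7-tiny-face_wo_nms.onnx"

# raises if the exported graph is structurally invalid
onnx.checker.check_model(onnx.load(onnx_path))

# print the input/output tensors that the TensorRT export step will consume
sess = ort.InferenceSession(onnx_path, providers=["CPUExecutionProvider"])
for t in sess.get_inputs():
    print("input :", t.name, t.shape, t.type)
for t in sess.get_outputs():
    print("output:", t.name, t.shape, t.type)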
6DRepNet (six-DoF head pose) repository: https://github.com/pdh930105/6DRepNet.git
git clone https://github.com/pdh930105/6DRepNet.git
cd 6DRepNet
python3 sixdrepnet/export_tensorrt.py -o resnet18_dof.onnx -e end2end_dof.trt
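Because TensorRT engines are device-specific, a quick check after export is to deserialize both engines on the Orin itself. A minimal sketch, assuming the tensorrt Python bindings from JetPack and the engine filenames produced by the two export commands above:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(path):
    # deserialization returns None if the engine was built for a different
    # device or TensorRT version
    runtime = trt.Runtime(TRT_LOGGER)
    with open(path, "rb") as f:
        return runtime.deserialize_cuda_engine(f.read())

for engine_path in ("end2end_yolo.trt", "end2end_dof.trt"):
    engine = load_engine(engine_path)
    print(engine_path, "->", "OK" if engine is not None else "FAILED")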
cd dof-inference
python3 dof_trt_inference.py --yolo_engine end2end_yolo.trt --dof_engine end2end_dof.trt --source obama.jpg
# --source : 'cam' = webcam (future work), 'rgb' = RealSense RGB stream, 'ir' = RealSense infrared stream, or a path to an image/video file (e.g. obama.jpg)
# --show-img : visualize the video/webcam/RealSense stream
# --get-fps : report the measured FPS
# --iter : number of iterations used to measure FPS (see the timing sketch below)
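The FPS figure is normally obtained by timing repeated inference passes. The sketch below shows that style of measurement; it is not the exact code in dof_trt_inference.py, and run_pipeline is a hypothetical stand-in for one detection + head-pose pass.

import time

def measure_fps(run_pipeline, iters=100):
    run_pipeline()                     # warm-up pass, excluded from timing
    start = time.perf_counter()
    for _ in range(iters):
        run_pipeline()
    elapsed = time.perf_counter() - start
    return iters / elapsed

# e.g. fps = measure_fps(lambda: infer(frame), iters=100)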
The Orin and the machine running the Unity application must share the same network (e.g. the same Wi-Fi), or the Orin must be reachable via a public IP address.
python3 dof_trt_inference.py --yolo_engine end2end_yolo.trt --dof_engine end2end_dof.trt --source rgb --server --ip-addr (local ip)
Then launch the Unity application and enter the Orin's local IP address.
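For reference, the sketch below illustrates the kind of streaming the --server flag implies: the Orin listens on its local IP and pushes head-pose angles to the Unity client that connects with that address. The port and JSON payload here are placeholders, not the protocol actually implemented in dof_trt_inference.py.

import json
import socket

HOST, PORT = "0.0.0.0", 5055            # placeholder port

def serve_poses(pose_source):
    # pose_source: iterable yielding (yaw, pitch, roll) per processed frame
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()           # Unity app connects using the Orin's IP
        with conn:
            for yaw, pitch, roll in pose_source:
                msg = json.dumps({"yaw": yaw, "pitch": pitch, "roll": roll})
                conn.sendall((msg + "\n").encode())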