laugh12321/TensorRT-YOLO
🚀 Your YOLO Deployment Powerhouse. With the synergy of TensorRT Plugins, CUDA Kernels, and CUDA Graphs, experience lightning-fast inference speeds.
C++ · GPL-3.0
Issues
[Help]: host_config.h: fatal error C1189: #error: -- unsupported Microsoft Visual Studio version!
#31 opened by auner2456889 - 6
[Help]: How can I pass a cv2 numpy array directly instead of an image path or directory?
#21 opened by woshiagan - 1
[Help]: No module named 'onnx_graphsurgeon'
#30 opened by auner2456889 - 0
[Feature]: Hope to support YOLO-World
#29 opened by Egrt - 2
[Help]: How to export YOLOv10 to ONNX?
#28 opened by zhongqiu1245 - 12
Support streaming video?
#19 opened by johnnynunez - 1
[Help]: How do I specify which GPU to use for inference?
#27 opened by leidingzhang - 15
[Help]: Key Error When Inferencing In Jupyter
#26 opened by Tom-Teamo - 9
[Help]: Unable to export ONNX model
#24 opened by liautumn - 2
[Question]: Where is the claimed "CUDA kernels to accelerate preprocessing" actually implemented?
#23 opened by songsong695 - 5
[Question]: Regarding EfficientNMS support
#22 opened by Tom-Teamo - 4
[Help]: Why can't my .cu file include the memory header, while detect.hpp can?
#18 opened by ChasePlan - 7
[Feature]: Hope to support YOLOv8 OBB
#16 opened by yxl23 - 3
[Help]: How to set the NMS threshold parameters
#14 opened by liautumn - 4
[Help]: trtexec export tensorrt model failed
#13 opened by fungtion - 2
How can I run simultaneous inference on multiple video streams with a TensorRT model?
#17 opened by Zzzames - 1
[Question]: Can this be used in DeepStream? I also couldn't find documentation for INT8 conversion.
#10 opened by tms2003 - 2
[Question]: Is NCNN supported?
#9 opened by aaafdsf - 7
[Bug]: AttributeError in YOLOv9 Model Export: 'AutoShape' object has no attribute 'fuse'
#8 opened by yaoandy107 - 4
[Question]: Can't export YOLOv9
#6 opened by twmht - 1
[Bug]: Exported YOLOv8 FP16 Engine Fails to Detect Objects (Precision Anomalies)
#3 opened by laugh12321 - 0
[Bug]: Engine Deserialization Failed when using YOLOv8 exported engine in detect.py
#2 opened by laugh12321 - 0
[Bug]: pycuda.driver.CompileError: nvcc compilation of kernel.cu failed on Jetson
#1 opened by laugh12321