WongKinYiu/YOLO

Query regarding GPU and CPU usage

Hemanth-TS opened this issue · 1 comment

When I ran the command below with device=cuda, GPU utilization sat around 20% while CPU usage was around 200%, and inference on a video was still very slow.
python yolo/lazy.py task=inference \ # default is inference
    name=AnyNameYouWant \ # AnyNameYouWant
    device=cuda \ # hardware: cuda, cpu, mps
    model=v9-s \ # model version: v9-c, m, s
    task.nms.min_confidence=0.1 \ # nms config
    task.fast_inference=onnx \ # onnx, trt, deploy
    task.data.source=data/toy/images/train \ # file, dir, webcam
    +quite=True # quiet output
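
Since the command selects task.fast_inference=onnx, one thing I still need to rule out is ONNX Runtime quietly running on the CPU execution provider: the plain onnxruntime package ships only the CPU provider, and only onnxruntime-gpu adds CUDA support, which would match the CPU-heavy load I am seeing. A minimal check:

import onnxruntime as ort

# The CPU-only "onnxruntime" package does not list 'CUDAExecutionProvider';
# the "onnxruntime-gpu" package adds it when CUDA is set up correctly.
print(ort.get_available_providers())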

Can someone guide me on how to run video inference properly at higher FPS while using the GPU efficiently?

Hi,

Could you please provide your system details, command line, and Git commit version? For example, my setup is:

  • Git Tag/Branch: main (commit add9e2f)
  • OS: Ubuntu 22.04
  • GPU: NVIDIA RTX 3090
  • CPU: AMD Ryzen 9 3900X
  • Command:
python yolo/lazy.py task=inference name=AnyNameYouWant device=cuda model=v9-c task.data.source=MOT20-05-raw.mp4 task.fast_inference=deploy

This configuration achieves 20 FPS without using ONNX or TensorRT.
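
As a quick sanity check that the forward pass really runs on the GPU, and to measure model-only throughput separately from video decoding, you can time a synchronized loop like the sketch below. The tiny nn.Sequential is only a stand-in for the real v9 model object; the device check and the timing pattern are the relevant parts:

import time

import torch
import torch.nn as nn

# Stand-in module; substitute the actual YOLOv9 model you load.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.SiLU()).eval()

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
print("running on:", next(model.parameters()).device)  # expect cuda:0

dummy = torch.zeros(1, 3, 640, 640, device=device)

with torch.inference_mode():
    for _ in range(5):  # warm-up iterations
        model(dummy)
    if device.type == "cuda":
        torch.cuda.synchronize()  # finish queued GPU work before timing
    start = time.perf_counter()
    n = 100
    for _ in range(n):
        model(dummy)
    if device.type == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"{n / elapsed:.1f} model-only FPS at 640x640")

If the device check prints cuda:0 but your end-to-end FPS is still far below the model-only number, the bottleneck is usually video decoding and preprocessing on the CPU rather than the network itself.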

Best regards,
Henry Tsui