GPU Memory-Usage Control
Referring to the earlier issue in the mmdetection-to-tensorrt project: multiple-batch inference is supported now, cool ~
However, GPU memory usage increases when using multiple batches (GPU memory usage is a tough problem in some projects). So, how can memory usage be controlled to keep GPU memory usage as low as possible?
Are there any parameters that can be set for this?
Dynamic input shapes do need more memory. If your input images have a fixed shape, such as 800*1088, setting opt_shape_param as follows should reduce memory usage:
```python
opt_shape_param=[
    [
        [1, 3, 1088, 800],  # min shape
        [2, 3, 1088, 800],  # opt shape
        [4, 3, 1088, 800],  # max shape
    ]
]
```
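For contrast, a minimal runnable sketch of why a fixed-shape profile helps: a dynamic profile lets height/width vary between min and max, so the engine must reserve buffers for the largest spatial size, while the fixed profile above only varies the batch dimension. The dynamic-profile shapes below are illustrative, not taken from the project.

```python
# Illustrative dynamic-shape profile: height/width vary between
# min and max, so buffers must cover the largest spatial size.
dynamic_profile = [
    [
        [1, 3, 320, 320],    # min shape
        [1, 3, 800, 1344],   # opt shape
        [1, 3, 1344, 1344],  # max shape
    ],
]

# Fixed-shape profile: height/width are pinned, only batch varies.
fixed_profile = [
    [
        [1, 3, 1088, 800],
        [2, 3, 1088, 800],
        [4, 3, 1088, 800],
    ],
]

def spatial_dims(profile):
    """Collect the distinct (H, W) pairs across the min/opt/max shapes."""
    return {tuple(shape[2:]) for shape in profile[0]}

print(len(spatial_dims(dynamic_profile)))  # 3 distinct spatial sizes
print(len(spatial_dims(fixed_profile)))    # 1 -> spatial shape is fixed
```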
max_workspace_size
is the temporary space used during inference. Some tactics might need a large workspace. Reducing it can also save some memory, at the cost of potentially losing some acceleration.
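To cap the workspace, pass a smaller byte budget when building the engine. A sketch, assuming the converter accepts max_workspace_size in bytes as a keyword argument (the commented-out call is hypothetical and needs a GPU plus the installed package; the config/checkpoint paths are placeholders):

```python
# Workspace budget in bytes: 1 << 30 is 1 GiB, 1 << 28 is 256 MiB.
max_workspace_size = 1 << 28

# Hypothetical invocation (requires a GPU and the mmdet2trt package):
# from mmdet2trt import mmdet2trt
# trt_model = mmdet2trt(
#     "detector_config.py", "detector_checkpoint.pth",
#     opt_shape_param=opt_shape_param,
#     max_workspace_size=max_workspace_size,  # smaller workspace -> less
# )                                           # memory, possibly slower tactics

print(max_workspace_size)  # 268435456 bytes
```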
I will try to optimize my plugins to see if they can use less workspace.