xuelunshen/gim

Is there room for optimizing the inference speed and GPU memory consumption of the model?

NickJony opened this issue · 3 comments

Congratulations, and thanks for your work. Great job!
I would like to ask about the possibility of optimizing the model's inference speed and GPU memory usage. In my testing, inference on a pair of images takes approximately 3.5 seconds on an RTX 3090, with peak GPU memory usage of 17 GB.
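For reproducible numbers, here is a minimal, framework-agnostic latency harness of the kind such a measurement assumes. The `run_matcher` stand-in below is hypothetical (it is not part of gim's API); in practice it would wrap the model's forward pass on one image pair.

```python
import time
import statistics

def run_matcher(pair):
    # Hypothetical stand-in for the model's forward pass on one image pair.
    time.sleep(0.001)
    return pair

def benchmark(fn, inputs, warmup=2, runs=10):
    """Return the median wall-clock time of fn(inputs), discarding warmup runs."""
    for _ in range(warmup):
        fn(inputs)  # warm caches and trigger any lazy initialization
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(inputs)
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)

median_s = benchmark(run_matcher, ("img0.png", "img1.png"))
print(f"median latency: {median_s * 1e3:.1f} ms")
```

On CUDA, one would additionally call `torch.cuda.synchronize()` before reading the timer (kernel launches are asynchronous) and read `torch.cuda.max_memory_allocated()` for peak memory; common first steps for reducing both numbers are `torch.inference_mode()`, half-precision inference, and lower input resolution.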

Thank you for your attention. We're working on it.

+1. BTW, is there any progress now?

We are currently running various experiments and explorations. Please be patient a little longer. Thank you for your interest and attention.