Is there any way to speed up the inference
Opened this issue · 4 comments
Coronal-Halo commented
Is there any way to speed up inference other than lowering the number of frames? Does reducing the video resolution speed up inference?
cm-xcju commented
I have the same problem. I'm using a single 4090 and it takes about 3 s per sample.
Fritskee commented
@Coronal-Halo the videos are internally reduced to 224x224 in dimension, so don't expect that to have a (noticeable) impact.
@cm-xcju 3 seconds for 1 video? That's actually not bad at all, considering you're essentially making 8 LLaVA calls, except that they're batched.
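To illustrate why input resolution barely matters: the exact preprocessing in this repo may differ, but the gist is that every frame is resized to a fixed 224x224 before it reaches the vision encoder, so the encoder's workload is the same regardless of the source resolution. A minimal sketch (nearest-neighbour resize; `preprocess_frames` is a hypothetical stand-in for the model's image processor):

```python
import numpy as np

def preprocess_frames(frames: np.ndarray, size: int = 224) -> np.ndarray:
    """Nearest-neighbour resize of (T, H, W, C) frames to (T, size, size, C).

    Illustrative only: real pipelines use the model's own image processor,
    but the point stands -- any input resolution is mapped to a fixed size
    before the vision encoder sees it.
    """
    t, h, w, c = frames.shape
    ys = np.arange(size) * h // size  # source row index for each output row
    xs = np.arange(size) * w // size  # source column index for each output column
    return frames[:, ys][:, :, xs]

# Two videos at very different resolutions...
low = np.zeros((8, 240, 320, 3), dtype=np.uint8)
high = np.zeros((8, 1080, 1920, 3), dtype=np.uint8)

# ...end up as identically shaped encoder inputs.
print(preprocess_frames(low).shape)   # (8, 224, 224, 3)
print(preprocess_frames(high).shape)  # (8, 224, 224, 3)
```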
Coronal-Halo commented
Is there a way to perform batch inference on a batch of videos?
Fritskee commented
> Is there a way to perform batch inference on a batch of videos?
If you use the search bar, you can find this issue: #40 (comment)
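For anyone landing here before checking the linked comment: the core idea of batching across videos is just stacking each video's frame tensor into one batch and running a single forward pass. A rough sketch under assumptions (frame counts are padded to the longest video; the actual model call and any attention masking depend on the repo's API, so `batch_videos` here only shows the stacking step):

```python
import numpy as np

def batch_videos(videos: list[np.ndarray]) -> np.ndarray:
    """Stack per-video frame arrays of shape (T_i, 224, 224, 3) into one
    (B, T_max, 224, 224, 3) batch, zero-padding shorter videos.

    Hypothetical helper for illustration; the real inference code may
    require an accompanying mask so padded frames are ignored.
    """
    max_t = max(v.shape[0] for v in videos)
    padded = [
        np.concatenate([v, np.zeros((max_t - v.shape[0],) + v.shape[1:], v.dtype)])
        if v.shape[0] < max_t else v
        for v in videos
    ]
    return np.stack(padded)

videos = [np.zeros((8, 224, 224, 3)), np.zeros((6, 224, 224, 3))]
batch = batch_videos(videos)
print(batch.shape)  # (2, 8, 224, 224, 3)
```

One forward pass over `batch` then amortizes model overhead across all videos, which is where most of the speedup comes from.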