doc-doc/HQGA

How much GPU memory does NExT-QA need?

Thanks for the question. Training the model needs about 24 GB with a batch size of 64 and 8 clips per video, whereas 8 GB is enough for inference. If you want to train with 16 clips, you need to reduce the batch size to 32.
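
For reference, a minimal sketch of the trade-off described above; the argument names are illustrative and not necessarily the repo's actual CLI flags:

```python
# Illustrative sketch: keeping batch_size * num_clips constant (64 * 8 == 32 * 16)
# keeps the per-step frame count, and thus roughly the ~24 GB training footprint,
# unchanged. Flag names are assumptions, not HQGA's exact interface.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--batch_size", type=int, default=64)
parser.add_argument("--num_clips", type=int, default=8)
args = parser.parse_args()

if args.batch_size * args.num_clips > 64 * 8:
    print("Warning: batch_size * num_clips exceeds 512; "
          "e.g. use --batch_size 32 with 16 clips to stay within ~24 GB.")
```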

Thank you for your reply.

Well, could you provide the pre-trained BERT model? Thanks.

Well, I also need this.

Hi, please find the code and model for BERT finetuning/feature extraction. You should open a new issue with a proper title, otherwise I may be slow in finding your questions.
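
For context, a minimal sketch of extracting question features with a fine-tuned BERT via HuggingFace transformers; the checkpoint path, max length, and use of token-level features are assumptions, not necessarily the repo's exact pipeline:

```python
# Minimal sketch of question feature extraction with a (fine-tuned) BERT.
# The checkpoint path and max_length are placeholders, not HQGA's released settings.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("path/to/finetuned_bert")  # hypothetical checkpoint path
model.eval()

question = "what did the girl do after she opened the box?"
inputs = tokenizer(question, return_tensors="pt",
                   padding="max_length", truncation=True, max_length=20)

with torch.no_grad():
    outputs = model(**inputs)

# Token-level features of shape (1, max_length, 768); pool or keep per-token
# depending on how the downstream model consumes them.
features = outputs.last_hidden_state
print(features.shape)
```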

OK, many thanks. Is the model in nextqa for question features the final model, or do I still need to train it? I just used that model to extract question features, but the results drop significantly, by nearly 5 points.