How much GPU memory does NExT-QA need?
Opened this issue · 6 comments
vacuum-cup commented
How much GPU memory does NExT-QA need?
doc-doc commented
Thanks for the question. It needs about 24 GB for training the model with batch size 64 and 8 clips per video, whereas 8 GB is enough for inference. If you want to train with 16 clips, you need to change the batch size to 32.
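The tradeoff above (64×8 clips vs. 32×16 clips) amounts to keeping the total number of clips per step constant so memory stays roughly flat. A minimal sketch of that scaling rule, with illustrative names that are not from the NExT-QA codebase:

```python
# Hedged sketch: scale batch size inversely with clips per video so that
# batch_size * clips (and hence per-step memory) stays roughly constant.
# Function name and defaults are assumptions for illustration only.
def scaled_batch_size(clips: int, base_batch: int = 64, base_clips: int = 8) -> int:
    """Return a batch size that keeps batch_size * clips constant."""
    return (base_batch * base_clips) // clips

print(scaled_batch_size(clips=8))   # 64, the default configuration above
print(scaled_batch_size(clips=16))  # 32, matching the suggestion above
```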
vacuum-cup commented
Thank you for your reply.
vacuum-cup commented
Could you offer the pre-trained BERT model? Thanks.
LemonQC commented
I also need this.
doc-doc commented
Hi, please find the code and model for BERT finetuning/feature extraction. Please open a new issue with a proper title; otherwise I may be slow in finding your questions.
LemonQC commented
OK, many thanks. Is the model in NExT-QA for question features the final model, or do I still need to train it? I just used the model to extract question features, but the results drop significantly, by nearly 5 points.