size mismatch for embedding.weight
kayoyin opened this issue · 1 comment
kayoyin commented
Thank you for open-sourcing the code!
I am trying to run validation with the dataset you provided, together with the pretrained "small" model.
I encounter this error when evaluating on the validation set:
size mismatch for embedding.weight: copying a param with shape torch.Size([20572, 300]) from checkpoint, the shape in current model is torch.Size([14613, 300]).
Do you know why this happens and how to fix the difference in vocabulary size for the embedding?
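For context, the same error can be reproduced outside this repo whenever a checkpoint's embedding table and the current model's embedding table disagree on vocabulary size. A minimal PyTorch sketch (independent of the mcan-vqa code; file name is made up):

```python
import torch
import torch.nn as nn

# Embedding table as it was when the checkpoint was saved (20572-token vocabulary).
ckpt = nn.Embedding(20572, 300)
torch.save(ckpt.state_dict(), 'small_ckpt.pkl')

# Embedding table built at evaluation time from a smaller token dictionary.
model = nn.Embedding(14613, 300)
model.load_state_dict(torch.load('small_ckpt.pkl'))
# RuntimeError: Error(s) in loading state_dict for Embedding:
#   size mismatch for weight: copying a param with shape torch.Size([20572, 300])
#   from checkpoint, the shape in current model is torch.Size([14613, 300]).
```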
Best,
Kayo
cuiyuhao1996 commented
Hi Kayo, I'm not sure whether you've changed any of the code. In the released code, [here](https://github.com/MILVLG/mcan-vqa/blob/master/core/data/load_data.py#L42), `stat_ques_list` is fixed, so the token dictionary is built from the [train, val, test, vg] splits. A vocabulary size of 14613 suggests that only [train] or [train, val] was used.