qfgaohao/pytorch-ssd

RuntimeError: The size of tensor a (12828) must match the size of tensor b (3000) at non-singleton dimension 1

SJavad opened this issue · 3 comments

SJavad commented

Hi 🖐
I trained an SSD model with `--resolution=640`, and when I try to convert the .pth model to .onnx, it gives me an error:

    locations[..., :2] * center_variance * priors[..., 2:] + priors[..., :2],
RuntimeError: The size of tensor a (12828) must match the size of tensor b (3000) at non-singleton dimension 1

When I convert a model trained at 300×300, it works and converts successfully,
but when I set `--width=640` and `--height=640`, this error happens.
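For context on the two numbers in the error: an SSD head produces one set of box locations per anchor ("prior") box, and the number of priors is determined by the feature-map sizes, which scale with the input resolution. A minimal sketch of the arithmetic, assuming the repo's MobileNet-SSD layout of 6 anchor boxes per feature-map cell (the 640-input feature-map sizes below are inferred by scaling, not taken from the repo's config):

```python
def num_priors(feature_map_sizes, boxes_per_cell=6):
    """Total anchors = sum over feature maps of (size^2 * boxes_per_cell)."""
    return sum(s * s * boxes_per_cell for s in feature_map_sizes)

# 300x300 input -> feature maps 19,10,5,3,2,1 -> 3000 priors
print(num_priors([19, 10, 5, 3, 2, 1]))   # 3000

# 640x640 input -> feature maps roughly 40,20,10,5,3,2 -> 12828 priors
print(num_priors([40, 20, 10, 5, 3, 2]))  # 12828
```

These match the sizes in the error message, which suggests the network outputs locations for the 640×640 feature maps (12828) while the prior boxes are still being generated from the 300×300 config (3000), so the broadcast in `locations[..., :2] * center_variance * priors[..., 2:]` fails at dimension 1.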

export command and log:

Namespace(net='ssd-mobilenet', input='', output='', labels='labels.txt', width=640, height=640, batch_size=1, model_dir='.\\models\\GED\\')
=> running on device cuda:0
=> found best checkpoint with loss 2.568583369255066 (.\models\GED\mb1-ssd-Epoch-6-Loss-2.568583369255066.pth)
creating network:  ssd-mobilenet
num classes:       3
=> loading checkpoint:  .\models\GED\mb1-ssd-Epoch-6-Loss-2.568583369255066.pth
=> exporting model to ONNX...
SJavad commented

A LOT of open issues and... 🤯
no answer...
Hmmm 🤔
Interesting... 🧐

Zalways commented

Hey! I ran into the same problem as you! Have you solved it? Looking forward to your reply!

SJavad commented

> Hey! I ran into the same problem as you! Have you solved it? Looking forward to your reply!

@Zalways
Hi,
I talked with dusty-nv, and he said to pull the repository (dusty-nv's repo) again to solve this problem.
It seems it was a bug and it has been fixed.