SwinTransformer/MIM-Depth-Estimation

About pretrain model

Closed this issue · 3 comments

When I train according to the training script provided by the readme, I get the following information:

size mismatch for layers.0.blocks.0.attn.relative_coords_table: copying a param with shape torch.Size([1, 23, 23, 2]) from checkpoint, the shape in current model is torch.Size([1, 43, 43, 2]).
size mismatch for layers.0.blocks.0.attn.relative_position_index: copying a param with shape torch.Size([144, 144]) from checkpoint, the shape in current model is torch.Size([484, 484]).
size mismatch for layers.0.blocks.1.attn.relative_coords_table: copying a param with shape torch.Size([1, 23, 23, 2]) from checkpoint, the shape in current model is torch.Size([1, 43, 43, 2]).
size mismatch for layers.0.blocks.1.attn.relative_position_index: copying a param with shape torch.Size([144, 144]) from checkpoint, the shape in current model is torch.Size([484, 484]).
size mismatch for layers.1.blocks.0.attn.relative_coords_table: copying a param with shape torch.Size([1, 23, 23, 2]) from checkpoint, the shape in current model is torch.Size([1, 43, 43, 2]).
... (the same size-mismatch messages repeat for the remaining blocks)

The pre-trained model's parameter shapes do not match the current model.
How can I solve this problem? I ran exactly according to your script and did not modify any code.
Thanks!

I am getting a similar problem, and also haven't modified the code at all.

Bumping this up, does anyone know how to fix this?

This mismatch has no impact, because relative_coords_table (and relative_position_index) are buffers computed directly during model initialization and do not need to be loaded from the checkpoint.
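
In case the warnings are still bothersome, a minimal sketch of how one could filter those buffers out of the checkpoint before loading is shown below. The checkpoint filename and the `model` variable are placeholders for your own setup, not names from this repo:

```python
import torch

# Placeholder checkpoint path; `model` is assumed to be the already-constructed
# Swin backbone from this repo's training script.
checkpoint = torch.load("pretrained_checkpoint.pth", map_location="cpu")
state_dict = checkpoint.get("model", checkpoint)

# Drop the relative-position buffers: they are rebuilt in __init__ for the
# current window size, so the checkpoint copies are not needed.
skip_suffixes = ("relative_coords_table", "relative_position_index")
filtered = {k: v for k, v in state_dict.items() if not k.endswith(skip_suffixes)}

# strict=False tolerates the (intentionally) missing buffer entries.
missing, unexpected = model.load_state_dict(filtered, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)
```

With this, only the shape-compatible weights are copied, and the size-mismatch messages for those buffers disappear.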