lucidrains/vit-pytorch

MAE bug!

hotco87 opened this issue · 2 comments

I ran the MAE example from README.md and hit a bug.
Please check the line below (the closing bracket was missing):
tokens = tokens + self.encoder.pos_embedding[:, 1:(num_patches + 1)]
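The failing line slices the encoder's positional embeddings so that the cls slot is skipped before they are added to the patch tokens. A minimal dependency-free sketch of that indexing (plain Python lists standing in for tensors; all names here are illustrative, not the library's):

```python
# Hypothetical sketch: pos_embedding is laid out as [cls, patch_1, ..., patch_N],
# so MAE must drop row 0 (the cls position) before adding positions to tokens.
num_patches = 4
dim = 2

# rows 0..num_patches; row i is filled with float(i) for visibility
pos_embedding = [[float(i)] * dim for i in range(num_patches + 1)]

# equivalent of pos_embedding[:, 1:(num_patches + 1)] -- skip the cls row
patch_positions = pos_embedding[1 : num_patches + 1]

# zero-initialized patch tokens, one row per patch
tokens = [[0.0] * dim for _ in range(num_patches)]

# elementwise add of each patch's positional embedding
tokens = [[t + p for t, p in zip(tok, pos)]
          for tok, pos in zip(tokens, patch_positions)]

print(tokens[0])  # first patch now carries position 1, not the cls slot
```

If the closing bracket is dropped from the slice, Python raises a SyntaxError at import time, which matches the error reported here.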

@hotco87 oops, i broke that with the dual patchnorm architectural update

should be fixed in 1.0.2!

Maybe this bug is still there? Have you changed it? I'm still hitting the error.