OPEN-AIR-SUN/mars

After shortening the image sequence, a size mismatch error occurs when calling load_state_dict.

Closed this issue · 2 comments

Due to hardware resource limitations, I had to shorten the image sequence from 65-125 to 65-110, but when I execute the rendering script, I get a model size mismatch:

size mismatch for background_model.field.embedding_appearance.embedding.weight: copying a param with shape torch.Size([84, 32]) from checkpoint, the shape in current model is torch.Size([70, 32]).
So to solve this problem, do I currently need a car_nerf_state_dict model that matches the input image sequence?
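For reference, the failure can be reproduced in isolation with plain PyTorch. The class below is a minimal stand-in for the appearance-embedding part of the model, not the actual MARS code; the 84/70 shapes come from the error message above.

```python
import torch
import torch.nn as nn

class AppearanceModel(nn.Module):
    """Minimal stand-in for the background model's appearance embedding."""

    def __init__(self, num_images: int):
        super().__init__()
        # One 32-dim appearance vector per input image.
        self.embedding = nn.Embedding(num_embeddings=num_images, embedding_dim=32)

# Checkpoint saved from a model trained on 84 input images -> table of shape (84, 32).
checkpoint = AppearanceModel(num_images=84).state_dict()

# Current model built from the shortened sequence only has 70 images.
model = AppearanceModel(num_images=70)
model.load_state_dict(checkpoint)
# RuntimeError: size mismatch for embedding.weight: copying a param with shape
# torch.Size([84, 32]) from checkpoint, the shape in current model is torch.Size([70, 32]).
```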

It has nothing to do with car_nerf_state_dict. It is about the appearance embedding weights in the Nerfacto (and some other NeRF) models: there is one appearance embedding per input image, so the size of the embedding table is tied to the number of input images.
You can shorten the sequence as described in #108 (comment) to bypass this error.
I don't know whether you would actually save any hardware resources by doing this, though.
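For illustration, here is a minimal sketch of how such a per-image appearance embedding is typically wired up; the class and argument names are illustrative, not the actual Nerfacto/MARS code:

```python
import torch
import torch.nn as nn

class AppearanceEmbedding(nn.Module):
    """One learned appearance vector per training image (Nerfacto-style)."""

    def __init__(self, num_images: int, appearance_dim: int = 32):
        super().__init__()
        # The table size is fixed by the number of input images at training
        # time, so a checkpoint only fits a model built from a sequence of
        # the same length.
        self.embedding = nn.Embedding(num_images, appearance_dim)

    def forward(self, camera_indices: torch.Tensor) -> torch.Tensor:
        # camera_indices: index of the source image for each ray/sample.
        return self.embedding(camera_indices)

appearance = AppearanceEmbedding(num_images=84)
vectors = appearance(torch.tensor([0, 17, 83]))
print(vectors.shape)  # torch.Size([3, 32])
```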

@Nplace-su Thank you very much for your answer. So I should leave the first_frame and last_frame parameters in the loaded config.yaml unchanged, and instead change the sequence I actually want to render in the rendering script?
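If that reading is correct, the idea would be roughly the following. This is only a sketch under that assumption; the config keys and the frames_to_render variable are hypothetical, not the actual MARS rendering script:

```python
import yaml

# Leave first_frame/last_frame in config.yaml at the values the checkpoint
# was trained with, so the rebuilt model's embedding table matches the
# checkpoint and load_state_dict succeeds. (The key names are assumptions.)
with open("config.yaml") as f:
    config = yaml.safe_load(f)

# Restrict only which frames actually get rendered (hypothetical variable;
# in practice this is whatever range the rendering script iterates over).
frames_to_render = range(65, 111)  # render 65-110, a subset of the trained range
for frame in frames_to_render:
    ...  # render this frame as usual
```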