use_one_betas_per_video does not work when the frame size of the provided initial betas is larger than one
yl-1993 opened this issue · 1 comment
yl-1993 commented
When initial betas are provided and their frame size is larger than 1, betas keep their original shape according to this line, regardless of whether use_one_betas_per_video is true or false. As a result, if the initial betas have shape (21, 10), the fitted betas contain 21 different values even when use_one_betas_per_video=True.
The expected behavior is that when use_one_betas_per_video=True, the fitted betas are shared across all frames, with shape (1, 10).
For example:

```python
if self.use_one_betas_per_video:
    if 'betas' not in init_dict:
        betas = torch.zeros(1, self.body_model.betas.shape[-1]).to(
            self.device)
    else:
        betas = init_dict['betas'].mean(dim=0, keepdim=True)
    ret_dict['betas'] = betas
```
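To illustrate the intended effect of the else branch, here is a minimal standalone sketch (the tensor contents are made up for illustration; only the `mean(dim=0, keepdim=True)` call mirrors the snippet above):

```python
import torch

# Hypothetical per-frame initial betas: 21 frames, 10 shape coefficients.
init_betas = torch.randn(21, 10)

# Averaging over the frame dimension collapses them into one shared vector.
shared_betas = init_betas.mean(dim=0, keepdim=True)

print(shared_betas.shape)  # torch.Size([1, 10]) -> a single betas vector shared by all frames
```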
LazyBusyYang commented
Fixed in PR99.