akanazawa/hmr

Custom regressor for additional keypoints

Closed this issue · 7 comments

Hi @akanazawa,

I'm working on a tool that avoids learning a new regressor: instead, the user clicks certain vertices and thereby defines a new keypoint, i.e.
[Screenshot of the vertex-selection tool, 2020-07-03]

The newly defined keypoint is then expressed through its N closest vertices by solving the linear matrix equation for the weights with a least-squares solution.
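
For illustration, a minimal numpy sketch of that idea (the function name and the single-pose setup are my own assumptions; in practice you would stack several posed meshes so the system is not underdetermined):

```python
import numpy as np

def fit_keypoint_weights(vertices, target_joint, n_closest=30):
    """Fit sparse regressor weights for one user-defined keypoint.

    vertices:     (V, 3) template mesh vertices
    target_joint: (3,) clicked 3D keypoint location
    Returns a (V,) weight vector, non-zero only for the N closest vertices.
    (Illustrative sketch, not the repo's actual code.)
    """
    # Pick the N vertices nearest to the clicked keypoint.
    dists = np.linalg.norm(vertices - target_joint, axis=1)
    idx = np.argsort(dists)[:n_closest]

    # Solve A w = b in the least-squares sense, where the columns of A
    # are the selected vertex positions (3 x N) and b is the keypoint.
    A = vertices[idx].T  # (3, n_closest)
    w_local, *_ = np.linalg.lstsq(A, target_joint, rcond=None)

    # Scatter the local weights back into a full-length weight vector.
    w = np.zeros(len(vertices))
    w[idx] = w_local
    return w
```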

Could you explain why the original cocoplus regressor is normalized to the range 0 to 1, whereas the vertices are not?

Hi, looks like a neat tool!!

Sorry, I forget most of the details and don't have access to the code right now. I think we used some kind of sparse linear regressor from scipy or something like that. If you're talking about the per-joint weights being normalized, I think it makes sense for them to sum to 1 and therefore lie in that range! Best!

Hi @akanazawa
sorry to bother you again.
I'm also curious about how the regressors (e.g. cocoplus_regressor or J_regressor_h36m) were obtained.
Could you share some code or describe the related process?
Thanks a lot in advance.

Hi @longbowzhang ,

you can check my repo. I went through the original SMPL 2015 paper again and found that they were using non-negative least squares. Originally the mesh was hand-segmented into 24 parts, and this was then optimized down to a sparse set of vertices with associated weights influencing each joint.
I have just released my interpretation of the tool, so feel free to have a look.
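
As a rough sketch of that procedure (the stacking convention and names are my assumptions, not the original code), fitting non-negative weights for one joint with scipy's nnls could look like:

```python
import numpy as np
from scipy.optimize import nnls

def fit_nnls_weights(vertex_stack, joint_stack):
    """Non-negative least-squares fit of regressor weights for one joint.

    vertex_stack: (P, V, 3) candidate vertices across P posed meshes
    joint_stack:  (P, 3) the joint's location in each of the P poses
    Returns a non-negative (V,) weight vector. (Illustrative sketch.)
    """
    P, V, _ = vertex_stack.shape
    # Flatten poses and coordinates into one linear system A w = b:
    # each pose contributes three rows (x, y, z).
    A = vertex_stack.transpose(0, 2, 1).reshape(P * 3, V)  # (3P, V)
    b = joint_stack.reshape(P * 3)
    w, _residual = nnls(A, b)
    return w
```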

Hi @russoale thanks very much for your reply and your repo.
Just a minor question. As far as I know, the associated weights for each joint sum up to one. Can non-negative least squares guarantee that?

Good point. I'm currently using scipy's nnls implementation, which doesn't allow adding constraints.
But you can append another equation, stating that the weights should sum up to 1, to the system of linear equations.
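
For example (a hedged sketch; `penalty` is a hypothetical tuning parameter), appending that extra equation with a large weight turns the sum-to-one condition into a soft constraint:

```python
import numpy as np
from scipy.optimize import nnls

def fit_nnls_sum_to_one(A, b, penalty=1e3):
    """NNLS with a soft sum-to-one constraint on the weights.

    Appends the extra equation penalty * sum(w) = penalty * 1 to A w = b,
    so nnls trades off fitting the joint positions against making the
    weights sum to one. (Sketch, not the repo's actual code.)
    """
    V = A.shape[1]
    A_aug = np.vstack([A, penalty * np.ones((1, V))])
    b_aug = np.append(b, penalty * 1.0)
    w, _residual = nnls(A_aug, b_aug)
    return w
```

Note this only enforces the constraint approximately; an exact equality together with non-negativity would need a proper constrained solver rather than plain nnls.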

Hi @russoale,
Sorry to bother you again. I have a question w.r.t. the implementation of the Discriminator.
In function Discriminator_separable_rotations, I found that there is no activation_fn (e.g., relu) used.
I think this conflicts with the paper, right?

Which layer are you referring to? As far as I have checked, the code is consistent with the paper.
Discriminator_separable_rotations uses TensorFlow contrib's slim package. Check the default parameters of the layers, and take a look at the arg_scope as well.
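
For context, a minimal TF 1.x sketch (not the repo's actual code) of why the activation is easy to miss: slim layers apply a ReLU by default unless it is explicitly overridden:

```python
import tensorflow as tf
import tensorflow.contrib.slim as slim  # requires TF 1.x with contrib

# slim.fully_connected defaults to activation_fn=tf.nn.relu, so a layer
# without an explicit activation_fn is still nonlinear. An arg_scope can
# also set the activation for every layer declared inside it.
with slim.arg_scope([slim.fully_connected], activation_fn=tf.nn.relu):
    x = tf.placeholder(tf.float32, [None, 9])  # e.g. a flattened rotation
    h = slim.fully_connected(x, 32)            # ReLU applied implicitly
    out = slim.fully_connected(h, 1, activation_fn=None)  # linear output
```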