Mathux/TEMOS

something wrong in the paper


In the code, the `num_heads` of the transformer encoder cannot be injected from the config into the .py file because of a spelling error, so it always stays at its default of 4 instead of matching `num_layers` as in the paper.
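
To illustrate the failure mode, here is a minimal sketch (the class and key names are hypothetical, not the actual TEMOS code): when extra config keys are absorbed by `**kwargs`, the misspelled key never reaches the real argument and the default silently wins.

```python
class EncoderSketch:
    # Hypothetical stand-in for the transformer encoder constructor:
    # extra config keys are swallowed by **kwargs, so a misspelled
    # "num_head" never reaches "num_heads".
    def __init__(self, latent_dim: int = 256, num_heads: int = 4, **kwargs):
        self.num_heads = num_heads

cfg = {"latent_dim": 256, "num_head": 6}  # note the typo: "num_head"
enc = EncoderSketch(**cfg)
print(enc.num_heads)  # -> 4: the intended 6 was silently dropped
```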

Also, `latent_dim` is set to 256 in the paper. With `num_heads = 6`, 256 is not divisible by 6, so the multi-head attention module cannot be constructed and PyTorch raises an error.
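
The divisibility constraint can be reproduced directly with PyTorch's `nn.MultiheadAttention`, which requires `embed_dim % num_heads == 0`:

```python
import torch.nn as nn

# head_dim = embed_dim / num_heads must be an integer
nn.MultiheadAttention(embed_dim=256, num_heads=4)  # ok: head_dim = 64

try:
    nn.MultiheadAttention(embed_dim=256, num_heads=6)  # 256 % 6 != 0
except AssertionError as err:
    print(err)  # "embed_dim must be divisible by num_heads"
```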

Hello eanson023,

Thanks for opening this issue; it is indeed a bug, and I will fix it.
I called the parameter "num_head" in the config instead of "num_heads".
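
For completeness, a sketch of what the corrected setup looks like (key names hypothetical, not the actual TEMOS config): the config key matches the constructor argument, and the head count divides `latent_dim`.

```python
import torch.nn as nn

# Hypothetical corrected config: the key is now "num_heads" so it actually
# reaches the constructor, and 256 is divisible by 4 (head_dim = 64).
cfg = {"latent_dim": 256, "num_heads": 4}
assert cfg["latent_dim"] % cfg["num_heads"] == 0

layer = nn.TransformerEncoderLayer(d_model=cfg["latent_dim"], nhead=cfg["num_heads"])
```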

I am currently reimplementing this code base to make it better and clearer, and to avoid this kind of issue.

I made the new code base available in the TMR repo: http://github.com/Mathux/TMR

Thank you (glad you still remember this lol 😄)