sentinel-hub/eo-flow

Incorrectly implemented MultiHeadAttention in PseTae

sbgeophd opened this issue · 2 comments

The implementation of the PseTae MultiHeadAttention appears to have a mistake.

The original implementation applies two fully connected layers to the query tensor (see here: https://github.com/VSainteuf/pytorch-psetae/blob/master/models/tae.py#L133).

However, in the eo-flow implementation, while both fully connected layers are defined, the second one (defined here: https://github.com/sentinel-hub/eo-flow/blob/master/eoflow/models/pse_tae_layers.py#L50) is never applied where it would be expected, i.e. here: https://github.com/sentinel-hub/eo-flow/blob/master/eoflow/models/pse_tae_layers.py#L66. In fact, it is not used anywhere in the code.
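For illustration, here is a minimal Keras sketch of the query path as the reference implementation structures it (two fully connected layers applied in sequence). The class name `QueryProjection` and the exact shapes are hypothetical; only `fc1_q` and `fc2` are meant to correspond to the layers defined in pse_tae_layers.py:

```python
import tensorflow as tf


class QueryProjection(tf.keras.layers.Layer):
    """Illustrative sketch of the query path only, not the full attention layer.

    Mirrors the PyTorch reference, in which the query passes through two
    fully connected layers (fc1_q followed by fc2) before attention scores
    are computed.
    """

    def __init__(self, n_head, d_k, **kwargs):
        super().__init__(**kwargs)
        self.n_head = n_head
        self.d_k = d_k
        # First query projection; this is the layer eo-flow already applies.
        self.fc1_q = tf.keras.layers.Dense(n_head * d_k)
        # Second projection, analogous to fc2 in pse_tae_layers.py; defined
        # there but, per this issue, never applied.
        self.fc2 = tf.keras.layers.Dense(n_head * d_k)

    def call(self, query):
        q = self.fc1_q(query)
        # The fix: also pass the query through the second fully connected
        # layer, matching the reference implementation.
        q = self.fc2(q)
        # Split the combined projection into per-head chunks:
        # (batch, n_head * d_k) -> (batch, n_head, d_k).
        return tf.reshape(q, (-1, self.n_head, self.d_k))
```

Assuming this reading is right, the fix is essentially a one-line change in the eo-flow `call` method: feed the output of `fc1_q` through `fc2` before the attention scores are computed.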

Hi @sbgeophd!

Yes, you are correct: the second fully connected layer is not used. I can't recall whether that was done on purpose, but I'd guess we simply overlooked this part.

Do you have the capacity to make a pull request fixing this issue? If not, we could do it ourselves, but it might not happen very soon.

Thanks for reporting the issue.

In theory I'm happy to make a pull request, but although I think I've fixed this particular issue, the model still doesn't work for me. I'll submit a pull request if/when I get it working.