lucidrains/egnn-pytorch

EGNN_sparse incorrect positional encoding output

josejimenezluna opened this issue · 2 comments

Hi, many thanks for the implementation!

I was quickly checking the PyTorch Geometric implementation of the EGNN_sparse layer, and I noticed that it expects the first 3 columns of the input to be the coordinates. However, in the update method, features and coordinates are passed in the wrong order:

return self.update((hidden_out, coors_out), **update_kwargs)

This may cause problems during learning (think of stacking several of these layers), since each layer expects the coordinate and feature order of its input to be consistent.
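To make the stacking issue concrete, here is a minimal sketch (helper names are hypothetical, not from the repo) of the layout contract each layer relies on: coordinates occupy the first pos_dim columns on the way in, so they must occupy the first pos_dim columns on the way out as well.

```python
import torch

# Hypothetical helpers illustrating the [coors | feats] column layout
# that EGNN_sparse-style layers assume; not part of the repo's API.
def split(x, pos_dim=3):
    return x[:, :pos_dim], x[:, pos_dim:]

def merge(coors, feats):
    return torch.cat([coors, feats], dim=-1)

x = torch.randn(16, 4)        # 3 coordinate columns + 1 feature column
coors, feats = split(x)

# A layer must return its output in the same column order it received.
# If it returned merge(feats, coors) instead, the next stacked layer
# would silently treat feature columns as coordinates.
x_next = merge(coors, feats)
assert torch.equal(x_next, x)
```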

One can reproduce this behaviour in the following snippet:

import torch
from egnn_pytorch import EGNN_sparse

# `rot` builds a 3x3 rotation matrix from three Euler angles
# (as defined in the repo's test utilities)
layer = EGNN_sparse(feats_dim=1, pos_dim=3, m_dim=16, fourier_features=0)

R = rot(*torch.rand(3))   # random rotation
T = torch.randn(1, 1, 3)  # random translation

feats = torch.randn(16, 1)
coors = torch.randn(16, 3)
x1 = torch.cat([coors, feats], dim=-1)                      # original input
x2 = torch.cat([(coors @ R + T).squeeze(), feats], dim=-1)  # transformed input
edge_idxs = (torch.rand(2, 20) * 16).long()

out1 = layer(x=x1, edge_index=edge_idxs)
out2 = layer(x=x2, edge_index=edge_idxs)

After fixing the order of these arguments in the update method, the layer behaves as expected: output features are invariant, and output coordinates are equivariant, under an SE(3) transformation (rotation plus translation).
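The check described above can be run numerically. The sketch below uses a toy EGNN-style update written in plain torch (a stand-in, not the repo's EGNN_sparse layer) whose weights depend only on pairwise distances, so it exhibits exactly the behaviour one would assert for the fixed layer: invariant features and equivariant coordinates.

```python
import math
import torch

def rot(alpha, beta, gamma):
    # 3D rotation from z-y-z Euler angles; any orthogonal matrix works here
    def rz(t):
        c, s = math.cos(t), math.sin(t)
        return torch.tensor([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])
    def ry(t):
        c, s = math.cos(t), math.sin(t)
        return torch.tensor([[c, 0., s], [0., 1., 0.], [-s, 0., c]])
    return rz(alpha) @ ry(beta) @ rz(gamma)

def toy_layer(coors, feats):
    # Toy EGNN-style update: weights depend only on pairwise distances,
    # so features come out invariant and coordinates equivariant
    rel = coors[:, None, :] - coors[None, :, :]  # (n, n, 3) relative positions
    dist2 = (rel ** 2).sum(-1, keepdim=True)     # (n, n, 1) E(n)-invariant
    w = torch.exp(-dist2)                        # invariant edge weights
    return coors + (w * rel).sum(dim=1), feats + w.sum(dim=1)

n = 16
coors, feats = torch.randn(n, 3), torch.randn(n, 1)
R, T = rot(0.3, 0.7, 1.1), torch.randn(1, 3)

c1, f1 = toy_layer(coors, feats)
c2, f2 = toy_layer(coors @ R + T, feats)

assert torch.allclose(f1, f2, atol=1e-4)          # features: invariant
assert torch.allclose(c1 @ R + T, c2, atol=1e-4)  # coordinates: equivariant
```

The same two assertions, applied to the coordinate and feature slices of out1 and out2 above, verify the fixed EGNN_sparse layer.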

@josejimenezluna looks great :) do you want to try submitting a PR?

@josejimenezluna solved it here https://github.com/lucidrains/egnn-pytorch/releases/tag/0.0.17 do let us know if this method works for your problem! planning on applying it to alphafold2, and it would be nice to hear corroboration that it works!