thinng/GraphDTA

Questions about 1D convolution

Closed this issue · 1 comment

Thank you for your excellent work. I have carefully read your code and have a question about the one-dimensional convolution. You embed the protein sequence into 128 dimensions, so a batch of protein embeddings has shape [512, 1000, 128]. You do not swap the last two dimensions, so the one-dimensional convolution is executed over the last dimension (the embedding dimension). However, one-dimensional convolution is usually performed over the sequence dimension, which requires permuting the last two dimensions. In short, your input to nn.Conv1d() is [batch_size, sequence_length, embedding_dim], while the input PyTorch expects is [batch_size, embedding_dim, sequence_length]. I think there is a problem here.
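A minimal sketch of the shape issue being described, using the [512, 1000, 128] batch from the comment above. The out_channels=32 and kernel_size=8 values are placeholders, not taken from the repository; the point is only how the dimension order changes what Conv1d slides the kernel over.

```python
import torch
import torch.nn as nn

batch_size, seq_len, embed_dim = 512, 1000, 128

# Embedded protein batch as described in the issue: [batch, seq_len, embed_dim]
x = torch.randn(batch_size, seq_len, embed_dim)

# Convolving over the sequence dimension requires channels = embed_dim,
# so the tensor must first be permuted to [batch, embed_dim, seq_len].
conv_seq = nn.Conv1d(in_channels=embed_dim, out_channels=32, kernel_size=8)
out_seq = conv_seq(x.permute(0, 2, 1))   # -> [512, 32, 993]: kernel slides along the 1000 residues

# Without the permute, Conv1d treats seq_len as the channel dimension and
# slides the kernel along the 128 embedding features instead.
conv_embed = nn.Conv1d(in_channels=seq_len, out_channels=32, kernel_size=8)
out_embed = conv_embed(x)                # -> [512, 32, 121]: kernel slides along the embedding axis

print(out_seq.shape, out_embed.shape)
```

Both calls run without error, which is why the mismatch is easy to miss: the network trains either way, but only the permuted version convolves over neighboring residues in the sequence.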

@thinng @qwiouer Hello! Have you solved this problem? I also noticed this issue in the protein embedding part.