leaderj1001/Attention-Augmented-Conv2d
Implementation of Attention Augmented Convolutional Networks in PyTorch
Python · MIT License
Issues
- OOM when allocating a tensor with shape [2, 2, 256, 256, 256, 256] while training an attention-augmented convolution ResUNet on 256×256×3 images (#30 opened by patilparam-edgeneural, 1 comment)
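The tensor in #30 is the attention-logits map, whose shape [B, Nh, H, W, H, W] grows with the fourth power of the spatial resolution. A back-of-the-envelope check (assuming float32, i.e. 4 bytes per element) shows why 256×256 inputs exhaust memory:

```python
# Rough memory estimate for the tensor reported in #30.
# Shape [B, Nh, H, W, H, W] = [2, 2, 256, 256, 256, 256], float32.
B, Nh, H, W = 2, 2, 256, 256
elements = B * Nh * (H * W) ** 2
gib = elements * 4 / 1024**3  # 4 bytes per float32 element
print(f"{elements:,} elements = {gib:.0f} GiB")  # 64 GiB, far beyond a single GPU
```

This quadratic-in-pixels cost is inherent to dense self-attention over all spatial positions, which is also what issues #16 and #13 below are about.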
- Problems with parameter registration (#7 opened by Kylin9511, 0 comments)
- Replace einsum operation with matmul (#26 opened by ananiask8, 0 comments)
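On #26: a contraction of the form `'bhxyd,md->bhxym'` (each query vector dotted with a table of relative key embeddings) is an ordinary matrix product with the embedding table transposed, since matmul broadcasts over the leading axes. A NumPy sketch of the equivalence; the shapes are illustrative, not taken from the repo:

```python
import numpy as np

# Hypothetical shapes: batch B, heads Nh, spatial H x W, key depth dk.
B, Nh, H, W, dk = 2, 4, 3, 3, 8
rng = np.random.default_rng(0)
q = rng.standard_normal((B, Nh, H, W, dk))
key_rel = rng.standard_normal((2 * W - 1, dk))  # one embedding per relative offset

via_einsum = np.einsum('bhxyd,md->bhxym', q, key_rel)
via_matmul = q @ key_rel.T  # matmul broadcasts over the leading B, Nh, H axes

assert np.allclose(via_einsum, via_matmul)
```

The same rewrite applies to `torch.einsum` and `torch.matmul`, which is relevant to the compatibility problem reported in #3 on older PyTorch versions.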
- A question about relative position embeddings (#23 opened by 787629504, 0 comments)
- Confused about the size of key_rel (#20 opened by xarryon, 2 comments)
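On #20: in relative self-attention, the key embedding table has one learned row per relative offset along an axis, and offsets between positions in a row of width W range from -(W-1) to W-1, giving 2W-1 rows. A minimal sketch (W here is a hypothetical width, not a value from the repo):

```python
# Why key_rel has 2*W - 1 rows: one embedding per possible relative offset.
W = 8  # hypothetical spatial width
offsets = list(range(-(W - 1), W))  # -(W-1), ..., -1, 0, 1, ..., W-1
assert len(offsets) == 2 * W - 1    # 15 rows in key_rel for W = 8
```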
- Can the width and height be different? (#14 opened by Fangyh09, 1 comment)
- Any reason for dk_k greater than 1? (#17 opened by tntjd7545, 0 comments)
- Memory complexity of self-attention (#16 opened by nguyenvo09, 1 comment)
- kernel_size in qkv_conv (#11 opened by ruslangrimov, 2 comments)
- Memory blow-up issue (#13 opened by sebastienwood, 1 comment)
- 1D version (#9 opened by akaniklaus, 3 comments)
- The matrix multiplication here appears to be wrong (#6 opened by flystarhe, 5 comments)
- torch.einsum() compatibility (#3 opened by SCoulY, 1 comment)
- Possible bugs in relative_logits functions (#1 opened by tqbl)