Question about the code in the basic GCN unit
daleigehhh opened this issue · 0 comments
While inspecting the model code, I noticed this in the `ConvTemporalGraphical` module:
```python
def __init__(self, in_channels, out_channels, kernel_size,
             t_kernel_size=1, t_stride=1, t_padding=0,
             t_dilation=1, bias=True):
    super(ConvTemporalGraphical, self).__init__()
    self.kernel_size = kernel_size
    self.conv = nn.Conv2d(
        in_channels,
        out_channels,
        kernel_size=(t_kernel_size, 1),
        padding=(t_padding, 0),
        stride=(t_stride, 1),
        dilation=(t_dilation, 1),
        bias=bias)
```
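For what it's worth, a `Conv2d` with `kernel_size=(t_kernel_size, 1)` does mix along the temporal axis whenever `t_kernel_size > 1`, while the width-1 kernel leaves the joint axis untouched. A minimal sketch (all sizes below are arbitrary, picked only for illustration):

```python
import torch
import torch.nn as nn

# Dummy sizes: batch N, input/output channels, frames T, joints V.
N, C_in, C_out, T, V = 2, 3, 8, 50, 18

# t_kernel_size=3 with padding 1: convolves over 3 neighboring frames,
# preserving T; the 1-wide kernel touches each joint independently.
conv = nn.Conv2d(C_in, C_out, kernel_size=(3, 1), padding=(1, 0))

x = torch.rand(N, C_in, T, V)
y = conv(x)
print(y.shape)  # torch.Size([2, 8, 50, 18])
```

With the default `t_kernel_size=1` this layer is purely pointwise in time, which is presumably why it looks like no temporal convolution happens here.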
It seems you did not apply a convolution on the graph along the temporal dimension? (I am new to GCNs, so if I am wrong, just ignore this.) According to the code of ST-GCN, they expand the number of output channels of this unit to `k * out_channels`, and then contract the feature maps with the `A` matrices over the `k` groups:
```python
self.conv = nn.Conv2d(
    in_channels,
    out_channels * kernel_size,
    kernel_size=(t_kernel_size, 1),
    padding=(t_padding, 0),
    stride=(t_stride, 1),
    dilation=(t_dilation, 1),
    bias=bias)

def forward(self, x, A):
    assert A.size(0) == self.kernel_size
    x = self.conv(x)
    n, kc, t, v = x.size()
    x = x.view(n, self.kernel_size, kc // self.kernel_size, t, v)
    x = torch.einsum('nkctv,kvw->nctw', (x, A))
    return x.contiguous(), A
```
Did you run ablation experiments on these settings? Looking forward to your reply, thank you!