nmhkahn/torchsummaryX

Layer name should be Layer Type

VascoLopes opened this issue

I have a suggestion regarding the layer names: in the summary, the Layer name shouldn't be the name of the variable the module is assigned to, but the name of its layer type.

Example:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchsummaryX import summary

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)

summary(Net(), torch.zeros((1, 1, 28, 28)))

returns:

=================================================================
                Kernel Shape     Output Shape  Params Mult-Adds
Layer
0_conv1        [1, 10, 5, 5]  [1, 10, 24, 24]   260.0    144.0k
1_conv2       [10, 20, 5, 5]    [1, 20, 8, 8]   5.02k    320.0k
2_conv2_drop               -    [1, 20, 8, 8]       -         -
3_fc1              [320, 50]          [1, 50]  16.05k     16.0k
4_fc2               [50, 10]          [1, 10]   510.0     500.0

With my suggestion, it would return:

=================================================================
                Kernel Shape     Output Shape  Params Mult-Adds
Layer
0_conv2d       [1, 10, 5, 5]  [1, 10, 24, 24]   260.0    144.0k
1_conv2d      [10, 20, 5, 5]    [1, 20, 8, 8]   5.02k    320.0k
2_dropout                  -    [1, 20, 8, 8]       -         -
3_linear           [320, 50]          [1, 50]  16.05k     16.0k
4_linear            [50, 10]          [1, 10]   510.0     500.0

This solves the problem of variables having non-standard names and makes it easier to identify each layer's type.
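
For reference, a minimal sketch of how such labels could be derived from the module class. type_based_name is a hypothetical helper, not part of torchsummaryX, and it reuses the Net class defined above; note that lowercasing the class name gives "dropout2d" rather than "dropout" for nn.Dropout2d:

# Hypothetical helper (not part of torchsummaryX): label each row with
# the module's class name instead of the attribute name it was assigned to.
def type_based_name(index, module):
    return f"{index}_{type(module).__name__.lower()}"

# Child modules are iterated in registration order, so the labels line
# up with the rows of the summary table.
labels = [type_based_name(i, m) for i, m in enumerate(Net().children())]
print(labels)
# ['0_conv2d', '1_conv2d', '2_dropout2d', '3_linear', '4_linear']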