Are the MLPs same as in the paper?
kuzand opened this issue · 1 comments
kuzand commented
In the paper, the first MLP (64, 64) has two layers and the second MLP (64, 128, 1024) has three layers.
In the implementation of PointNetfeat here, the first MLP has just one layer of size 64 and the second MLP has two layers of sizes 128 and 1024, as can be seen from the code below:
self.conv1 = torch.nn.Conv1d(3, 64, 1)
self.conv2 = torch.nn.Conv1d(64, 128, 1)
self.conv3 = torch.nn.Conv1d(128, 1024, 1)
It works fine, but to be consistent with the paper the following modifications can be made:
# First MLP
self.conv1 = torch.nn.Conv1d(3, 64, 1)
self.conv2 = torch.nn.Conv1d(64, 64, 1)
# Second MLP
self.conv3 = torch.nn.Conv1d(64, 64, 1)
self.conv4 = torch.nn.Conv1d(64, 128, 1)
self.conv5 = torch.nn.Conv1d(128, 1024, 1)
and then modify the forward method accordingly.
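For reference, the corresponding forward pass could look roughly like the minimal sketch below. It assumes matching BatchNorm1d layers bn1 through bn5 are defined in __init__ alongside the convolutions, and it leaves out the input/feature transform (STN) steps that the original forward method applies:

import torch
import torch.nn.functional as F

def forward(self, x):
    # First MLP (64, 64)
    x = F.relu(self.bn1(self.conv1(x)))
    x = F.relu(self.bn2(self.conv2(x)))
    # Second MLP (64, 128, 1024)
    x = F.relu(self.bn3(self.conv3(x)))
    x = F.relu(self.bn4(self.conv4(x)))
    x = self.bn5(self.conv5(x))
    # Symmetric max pooling over the points gives the global feature
    x = torch.max(x, 2, keepdim=True)[0]
    return x.view(-1, 1024)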
chenxl124578 commented
I also found this difference, and I noticed that there is another issue with the same question.