Seg fault while training model with maxpool op
chethanpk commented
I wanted to test the maxpool op using a training script. Following the inputs given in a previous issue, I disabled the maxpool-to-AtenOp binding so that the resulting graph contains maxpool and maxpoolgrad ops instead of an Aten op. However, this causes a segfault while running the maxpoolgrad op. Here is my model:
```python
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2(x), 2))
        x = x.view(-1, 320)  # 320 = 20 channels * 4 * 4 spatial, for 28x28 input (e.g. MNIST)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)
```
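For context, the training loop is essentially the standard MNIST example. Below is a minimal sketch of it, with random MNIST-shaped tensors standing in for the real data loader; the batch size and optimizer settings here are placeholders, not the exact values from my script:

```python
import torch
import torch.nn.functional as F
import torch.optim as optim

model = Net()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

# Random MNIST-shaped batch as a stand-in for the real dataset.
data = torch.randn(64, 1, 28, 28)
target = torch.randint(0, 10, (64,))

model.train()
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()  # the backward pass is where maxpoolgrad runs and the segfault occurs
optimizer.step()
```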
My question is: why is the Aten op conversion needed in the first place? How was it decided which set of ops needed to be bound to Aten ops?
If needed, I can provide the entire training script.