Using pretrained models/forward hooks in pytorch gives grad as None
Opened this issue · 7 comments
I was attempting to view the feature maps of a pretrained VGG model in PyTorch.
Instead of saving the features in the `forward` method of the model, I registered a forward hook on the layer(s) where I wanted to record the features, as below:
```python
from torchvision import models
from receptivefield.pytorch import PytorchReceptiveField


class FmapHookGenerator(object):
    def __init__(self):
        self.data = []

    def __call__(self):
        def data_saver(model, input, output):
            self.data.append(output)
        return data_saver


# return model to main
def model_generator():
    model = models.vgg11(pretrained=True)
    fmg = FmapHookGenerator()
    # register fmg hooks
    model.feature_maps = fmg.data
    return model


rf = PytorchReceptiveField(model_generator)
params = rf.compute([224, 224, 3])
```
And I get the error below (`input_tensor.grad` is `None`):
```
  File "./receptivefield/pytorch.py", line 162, in compute
    return super().compute(input_shape=input_shape)
  File "./receptivefield/base.py", line 167, in compute
    center_offsets=[GridPoint(0, 0)] * self.num_feature_maps
  File "./receptivefield/base.py", line 142, in _get_gradient_activation_at_map_center
    return self._get_gradient_from_grid_points(points=points, intensity=intensity)
  File "./receptivefield/pytorch.py", line 147, in _get_gradient_from_grid_points
    torch_grads = self._gradient_function(output_feature_maps)
  File "./receptivefield/pytorch.py", line 74, in gradient_function
    grads.append(input_tensor.grad.detach().numpy())
AttributeError: 'NoneType' object has no attribute 'detach'
```
Are forward hooks not usable for obtaining receptive fields?
Hi, to be honest, I don't see why this should work in your case. You create a standalone instance of the hook class, `fmg = FmapHookGenerator()`, which does not depend on the model, and then you assign `model.feature_maps = fmg.data`. What is `fmg.data` at that point? An empty list (`[]`); it doesn't seem to contain any model or graph. Did you mean something like `fmg = FmapHookGenerator(model)`?
PS: I'm not familiar with forward hooks in PyTorch.
Your assumption is correct: I just thought forward hooks would be a nice way to save the intermediate layer outputs.
Instead of `fmg = FmapHookGenerator(model)` I have:

```python
for layer in model.children():
    layer.register_forward_hook(fmg)
```

which means that `fmg(layer, input, output)` is called at each layer during the forward pass, so I save the output of each layer as it is computed.
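As a self-contained sketch of that registration pattern (using a tiny stand-in network and a plain hook function, so it runs without downloading VGG weights; all names here are illustrative, not part of the receptivefield package):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the VGG feature extractor: a tiny conv stack.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
)

captured = []

def data_saver(module, inputs, output):
    # Forward-hook signature: (module, input_tuple, output).
    captured.append(output)

# Register the same hook on every child layer, as described above.
handles = [layer.register_forward_hook(data_saver) for layer in model.children()]

x = torch.randn(1, 3, 8, 8)
model(x)

# One captured output per child layer.
assert len(captured) == 3
assert captured[-1].shape == (1, 16, 8, 8)

# Hooks can be removed once they are no longer needed.
for h in handles:
    h.remove()
```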
I realize my explanation of forward hooks might not be good, so I've linked the tutorial example I read and a PyTorch discussion of using forward hooks.
Also, in case my usage of forward hooks is not proper, can you suggest an alternate way of viewing the feature maps of a pre-trained model like VGG?
Hmm, probably the most brute-force way to do it is to copy the official implementation of the VGG network, load the pretrained weights into it, and then modify the code to append the outputs of the feature maps you want to investigate to the `self.feature_maps` list; check the example from the README:
```python
def forward(self, x):
    self.feature_maps = []
    select = [4, 10, 12]  # layer indices
    for lid, layer in enumerate(self.layers):
        x = layer(x)
        if lid in select:
            self.feature_maps.append(x)
    return x
```
What do you think?
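A minimal, runnable version of that pattern, with a hypothetical toy network standing in for the copied VGG code, might look like:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    # Toy network standing in for the copied VGG implementation.
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Conv2d(3, 4, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(4, 8, 3, padding=1),
            nn.ReLU(),
        ])
        self.feature_maps = []

    def forward(self, x):
        self.feature_maps = []
        select = [1, 3]  # layer indices to record
        for lid, layer in enumerate(self.layers):
            x = layer(x)
            if lid in select:
                self.feature_maps.append(x)
        return x

net = TinyNet()
out = net(torch.randn(1, 3, 8, 8))
assert len(net.feature_maps) == 2
assert out.shape == (1, 8, 8, 8)
```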
Yes, I am able to view the receptive field(s) if I modify the `forward` method of the model's implementation.
I'll still try to put together a working `forward_hook`-based implementation, though; it would make it really convenient to view the fields of any CNN.
Thanks for the feedback. If you think this could be useful for others, you can open a PR and we could add your hook to this package.
Thanks a lot for your awesome repo. Is there any hook implementation?
Nope, sorry, there was no PR from @ahgamut with this feature.
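For readers landing here with the same `AttributeError`: forward hooks themselves do not break the autograd graph. A hedged, minimal check (names illustrative, not part of this package) that backpropagating from a hook-captured feature map does populate `input_tensor.grad`:

```python
import torch
import torch.nn as nn

# Tiny model; hypothetical stand-in for a pretrained CNN.
model = nn.Sequential(
    nn.Conv2d(3, 4, 3, padding=1),
    nn.Conv2d(4, 4, 3, padding=1),
)

captured = []
model[0].register_forward_hook(lambda m, i, o: captured.append(o))

x = torch.randn(1, 3, 8, 8, requires_grad=True)
model(x)

# Backprop from the captured intermediate feature map; the graph from
# x to the hook-captured output is intact, so x.grad is populated.
captured[0].sum().backward()
assert x.grad is not None
assert x.grad.shape == x.shape
```

So if `input_tensor.grad` is `None`, the likely cause is that the hooks were never registered (so `feature_maps` stayed empty) rather than the hook mechanism itself.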