albermax/innvestigate

[REQUEST] Need a little bit detailed explanation for some LRP methods...

Closed this issue · 2 comments

Hi,

Could you kindly explain in detail which rules each of the following methods uses? The documentation only states "Special LRP-configuration for ConvNets".

  1. LRPSequentialPresetA
  2. LRPSequentialPresetB
  3. LRPSequentialPresetAFlat
  4. LRPSequentialPresetBFlat

Thank you in advance!

Please check out https://github.com/berleon/when-explanations-lie/blob/master/when_explanations_lie.py and their paper https://arxiv.org/abs/1912.09818

('LRP CMP $\\alpha1\\beta0$', 'lrp.sequential_preset_a', 'sum', [], {"epsilon": 1e-10}),
('LRP CMP $\\alpha2\\beta1$', 'lrp.sequential_preset_b', 'sum', [], {"epsilon": 1e-10}),
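For reference, here is a minimal sketch of how these presets are typically instantiated in iNNvestigate, assuming a trained Keras `model` and an input batch `x` (the exact location of the softmax-stripping helper may differ between iNNvestigate versions):

```python
import innvestigate
import innvestigate.utils

# Assumed: `model` is a trained Keras classifier and `x` is an input batch.
# LRP is usually applied to the pre-softmax scores, so strip the softmax first
# (helper location may vary with the iNNvestigate version).
model_wo_softmax = innvestigate.utils.model_wo_softmax(model)

# Composite presets from the snippet above:
# dense layers -> LRP-epsilon, conv layers -> LRP-alpha-beta.
analyzer_a = innvestigate.create_analyzer(
    "lrp.sequential_preset_a", model_wo_softmax, epsilon=1e-10)  # alpha=1, beta=0
analyzer_b = innvestigate.create_analyzer(
    "lrp.sequential_preset_b", model_wo_softmax, epsilon=1e-10)  # alpha=2, beta=1

relevance_a = analyzer_a.analyze(x)  # relevance maps, same shape as x
relevance_b = analyzer_b.analyze(x)
```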

And this other paper describes the Flat variants: https://arxiv.org/abs/1910.09840v3

Maybe also check out my paper, https://arxiv.org/abs/2012.10294, Section 3.5, for a more formal definition of the composite rule, in which a small constant (e.g. epsilon = 1e-10) is added to the denominator of the LRP formula to reduce the effect of noise in the fully-connected layers.
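For completeness, this is the common form of the epsilon-stabilized LRP rule referred to here, where $a_j$ is the activation of neuron $j$ in the lower layer, $w_{jk}$ the weight connecting $j$ to $k$, and $R_k$ the relevance arriving at neuron $k$ (some implementations additionally match the sign of $\epsilon$ to that of the denominator):

$$R_j = \sum_k \frac{a_j\, w_{jk}}{\epsilon + \sum_{j'} a_{j'}\, w_{j'k}}\, R_k$$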

First, thank you @martindyrba.

In short, the presets use the LRP-epsilon rule in dense layers and the LRP-alpha-beta rule in convolutional layers.
The detailed differences are (a small sketch of the alpha-beta rule follows at the end of this comment):
PresetA uses alpha=1, beta=0
PresetB uses alpha=2, beta=1

All presets ending in *Flat additionally apply the LRP-flat rule (i.e., a uniform distribution of relevance that adheres only to the model's connectivity structure, disregarding weights and activations) in the lowest convolutional layer.
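To make the alpha/beta distinction concrete, here is an illustrative NumPy sketch of a single alpha-beta LRP step through a dense layer. This is a simplified stand-in, not the iNNvestigate implementation; the function name and the epsilon stabilizer are my own choices:

```python
import numpy as np

def lrp_alpha_beta_step(a, W, R_upper, alpha, beta, eps=1e-12):
    """One alpha-beta LRP backward step through a dense layer.
    a: lower-layer activations, shape (n,); W: weights, shape (n, m);
    R_upper: relevance of the upper layer, shape (m,).
    Conservation requires alpha - beta = 1."""
    z = a[:, None] * W                 # per-connection contributions a_j * w_jk
    z_pos = np.clip(z, 0, None)        # positive contributions
    z_neg = np.clip(z, None, 0)        # negative contributions
    s_pos = z_pos.sum(axis=0) + eps    # stabilized per-neuron sums
    s_neg = z_neg.sum(axis=0) - eps
    # alpha scales the positive share, beta the (subtracted) negative share
    return (alpha * z_pos / s_pos - beta * z_neg / s_neg) @ R_upper

# PresetA: only positive contributions propagate relevance.
# R_a = lrp_alpha_beta_step(a, W, R_upper, alpha=1, beta=0)
# PresetB: negative contributions are subtracted with half the weight.
# R_b = lrp_alpha_beta_step(a, W, R_upper, alpha=2, beta=1)
```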