Assess interpretability
gcastro-98 commented
Enhance the decoder CNN by explaining its decisions via:
- Layer-Wise Relevance Propagation (LRP): there are several PyTorch implementations. For instance:
  - a tutorial applying LRP to VGG-16 and other architectures
  - a different flavour, also applicable to pre-trained large VGG models from torchvision
  - other suggestions: note that residual connections (e.g. those of ResNet) may cause continuity problems
- Gradient-based localization (Grad-CAM)
- Other explainability flavours, beyond pixel-attribution methods:
  - Concept Activation Vectors (CAV): check, for instance, the Captum implementation
- Counterfactual explanations
- Contrastive explanations
- LIME for image classification
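To make the Grad-CAM suggestion above concrete, here is a minimal sketch of the technique in plain PyTorch hooks-free style: backpropagate the target-class score to a convolutional feature map, average the gradients per channel to get weights, and take the ReLU of the weighted sum of channels. The `ToyCNN` model is a hypothetical stand-in (the issue's decoder CNN is not shown), so shapes and layer names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyCNN(nn.Module):
    """Hypothetical stand-in for the decoder CNN; returns logits and
    the last convolutional feature map (needed by Grad-CAM)."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        fmap = self.features(x)          # (B, 16, H, W)
        pooled = fmap.mean(dim=(2, 3))   # global average pooling
        return self.head(pooled), fmap

def grad_cam(model, x, target_class):
    model.eval()
    logits, fmap = model(x)
    fmap.retain_grad()                   # keep gradients on the non-leaf map
    logits[0, target_class].backward()
    # Grad-CAM channel weights: spatial mean of the gradients
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)  # (B, C, 1, 1)
    cam = F.relu((weights * fmap).sum(dim=1))           # (B, H, W)
    cam = cam / (cam.max() + 1e-8)                      # normalize to [0, 1]
    return cam.detach()

x = torch.randn(1, 3, 32, 32)
model = ToyCNN()
cam = grad_cam(model, x, target_class=0)
print(cam.shape)  # torch.Size([1, 32, 32])
```

Because the convolutions preserve spatial size here, the heatmap already matches the input resolution; with a real backbone it would be upsampled to the input size before overlaying.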
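For the LIME suggestion, a rough NumPy-only sketch of the idea: perturb the image by switching grid patches on and off (a simplified stand-in for superpixels), score each perturbed image with the black-box model, and fit a locally weighted linear model whose coefficients rank patch importance. The `black_box` scorer below is a toy assumption (mean brightness), not a real classifier.

```python
import numpy as np

def black_box(img):
    # Hypothetical black-box scorer: mean brightness stands in for a
    # model's class probability.
    return img.mean()

def lime_image(img, scorer, grid=4, n_samples=200, seed=0):
    """LIME-style sketch: approximate the scorer locally with a linear
    model over on/off grid patches (simplified 'superpixels')."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    ph, pw = h // grid, w // grid
    n_patches = grid * grid
    masks = rng.integers(0, 2, size=(n_samples, n_patches))
    ys = np.empty(n_samples)
    for i, m in enumerate(masks):
        perturbed = img.copy()
        for p in range(n_patches):
            if m[p] == 0:  # switch patch off (black it out)
                r, c = divmod(p, grid)
                perturbed[r*ph:(r+1)*ph, c*pw:(c+1)*pw] = 0.0
        ys[i] = scorer(perturbed)
    # weight samples by proximity to the unperturbed image
    dist = 1 - masks.mean(axis=1)
    weights = np.exp(-(dist ** 2) / 0.25)
    X = np.hstack([masks, np.ones((n_samples, 1))])  # add intercept column
    W = np.diag(weights)
    coef = np.linalg.lstsq(W @ X, W @ ys, rcond=None)[0]
    return coef[:n_patches].reshape(grid, grid)  # per-patch importance

img = np.zeros((32, 32))
img[:16, :16] = 1.0  # bright top-left quadrant
imp = lime_image(img, black_box)
```

Here the top-left patches receive the largest coefficients, since removing them lowers the score most; a real implementation would use an actual segmentation algorithm and a sparse (e.g. lasso) surrogate model.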