This repository walks through an example of LIME (Local Interpretable Model-Agnostic Explanations). Original LIME paper: https://arxiv.org/abs/1602.04938
LIME provides a means to explain the predictions of any black-box classifier or regressor.
Such models can be difficult to analyze at a global level, but their behavior can often be approximated in the neighborhood of a specific instance.
Desirable characteristics of an explainable model:
- Interpretable
- Local Fidelity
- Model-Agnostic
- Global Perspective
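The core idea behind local fidelity can be sketched in a few lines: perturb the instance being explained, weight the perturbed samples by proximity, and fit a weighted linear surrogate to the black box's outputs. The snippet below is a minimal illustration of that idea using only NumPy (the black-box function and kernel width are invented for the example, not taken from the lime library):

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    """Opaque nonlinear classifier: returns P(class = 1)."""
    z = 3.0 * X[:, 0] - 2.0 * X[:, 1] + X[:, 0] * X[:, 1]
    return 1.0 / (1.0 + np.exp(-z))

# Instance we want to explain.
x0 = np.array([1.0, 1.0])

# 1. Perturb the instance to sample its neighborhood.
X = x0 + rng.normal(scale=0.5, size=(500, 2))
y = black_box(X)

# 2. Weight samples by proximity to x0 (exponential kernel).
d2 = ((X - x0) ** 2).sum(axis=1)
w = np.exp(-d2 / 0.5)

# 3. Fit a weighted linear surrogate via closed-form weighted least squares.
A = np.hstack([X - x0, np.ones((len(X), 1))])  # centered features + intercept
beta = np.linalg.solve(A.T * w @ A, A.T * w @ y)

# The surrogate's coefficients describe the black box locally around x0:
# feature 0 pushes the prediction up, feature 1 pushes it down.
print(beta[:2])
```

The surrogate is only faithful near `x0` (local fidelity); farther away, the interaction term makes the linear fit break down, which is exactly why LIME re-fits a new surrogate per instance.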
Utilizes:
- The lime library: https://github.com/marcotcr/lime
- PyTorch
- A model pretrained on ImageNet
- PIL
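For images, LIME perturbs superpixels (contiguous segments) rather than raw pixels, greying segments out and observing how the classifier's score changes. The sketch below mimics that pipeline with a toy 8x8 image and a hand-rolled quadrant segmentation; the black-box scorer and kernel are stand-ins for the pretrained model and the lime library's internals, not their actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 8x8 grayscale "image" whose signal lives in the top-left quadrant.
image = np.zeros((8, 8))
image[:4, :4] = 1.0

# Segment the image into 4 quadrant "superpixels" (a real pipeline would use
# a segmentation algorithm such as quickshift, as the lime library does).
segments = np.zeros((8, 8), dtype=int)
segments[:4, 4:] = 1
segments[4:, :4] = 2
segments[4:, 4:] = 3
n_seg = 4

def black_box(img):
    """Opaque classifier: scores an image by its top-left brightness."""
    return img[:4, :4].mean()

# Sample binary masks: 1 keeps a superpixel, 0 greys it out.
masks = rng.integers(0, 2, size=(200, n_seg))
preds, weights = [], []
for m in masks:
    perturbed = image * m[segments]   # zero out the dropped superpixels
    preds.append(black_box(perturbed))
    off = n_seg - m.sum()             # distance = number of dropped segments
    weights.append(np.exp(-(off / n_seg) ** 2))
preds, weights = np.array(preds), np.array(weights)

# Weighted linear surrogate over the binary mask features.
A = np.hstack([masks, np.ones((len(masks), 1))])
beta = np.linalg.solve(A.T * weights @ A, A.T * weights @ preds)

# The largest coefficient identifies the superpixel driving the prediction.
print(int(np.argmax(beta[:n_seg])))  # → 0 (the top-left quadrant)
```

In the real notebook the same loop runs through `lime_image.LimeImageExplainer`, with the PyTorch ImageNet model (fed via PIL) as the black box and the explainer highlighting the winning superpixels on the original photo.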