The purpose of this library is to implement explainability methods as clearly as possible. Ideally it will support both PyTorch and TensorFlow, but initially the focus is just on PyTorch. The library should be usable for real explainability tasks, but since the emphasis is on clarity rather than speed, it may not be ideal in every case (most likely larger datasets).
Another big part of making it clear and usable is having tests that are less about verifying an implementation against precomputed values and more about making it easy to write tests when you are implementing a new method. As such, there are some specifics around testing discussed below. It is somewhat of an overlay over unittest/pytest and a compromise between usability and what I've seen really well written libraries do (for instance, some ideas from AllenNLP).

Methods:
- CAM
- GradCAM
- TCAV
- FGSM
- Perturbation Basics
- Integrated Gradients
- webserver to display some of these!
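To give a sense of what these methods involve, here is a minimal FGSM sketch in plain PyTorch. This is an illustration only, not tellem's API; `model`, `x`, and `y` stand in for your own model, inputs, and labels.

```python
# Minimal FGSM sketch in plain PyTorch (illustration only, not tellem's API).
import torch
import torch.nn.functional as F


def fgsm(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
    """x_adv = x + eps * sign(grad_x loss): a one-step attack that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the sign of the input gradient, then clamp back to a valid pixel range.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```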
To install and use, clone the repository, cd into the directory, and run pip install:
```bash
git clone git@github.com:grahamannett/tellem.git
cd tellem/
pip install -e .
```
Alternatively, you can install without the examples or tests:
```bash
pip install git+https://github.com/grahamannett/tellem.git
```
For examples of how to implement a new method, check out docs/guide; a hypothetical sketch of the general idea is shown below.
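As a rough illustration only (this is a hypothetical structure, not the one documented in docs/guide), a new method such as vanilla gradient saliency could be wrapped up like this:

```python
# Hypothetical sketch of a new explainability method (vanilla gradient
# saliency) in plain PyTorch. The class and method names are illustrative
# placeholders, not tellem's actual API -- see docs/guide for that.
import torch


class GradientSaliency:
    def __init__(self, model: torch.nn.Module):
        self.model = model

    def attribute(self, x: torch.Tensor, target: int) -> torch.Tensor:
        """Return |d output[target] / d x| as a saliency map over the input."""
        x = x.clone().detach().requires_grad_(True)
        self.model(x)[:, target].sum().backward()
        return x.grad.abs()
```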
To run the tests:

```bash
pytest tests/
```
- I started using unittest but am looking at moving over to pytest, as I know a lot of people recommend it. I'm somewhat interested in moving the `setup_method` logic over to pytest fixtures, but I also want to keep the way you test an explainability method as simple and obvious as possible (a rough sketch of what that could look like follows).
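For instance, a fixture-based test might look something like this. This is a hypothetical sketch and not tellem's actual test suite; `SimpleModel` and the shape check are placeholders.

```python
# Hypothetical sketch: testing an explainability method with pytest fixtures
# instead of unittest's setup_method. SimpleModel is a placeholder, not part
# of tellem; the test checks a property (attribution shape matches input
# shape) rather than comparing against precomputed values.
import pytest
import torch


class SimpleModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 2)

    def forward(self, x):
        return self.linear(x)


@pytest.fixture
def model():
    return SimpleModel()


@pytest.fixture
def inputs():
    return torch.randn(8, 4)


def test_gradient_attribution_shape(model, inputs):
    x = inputs.clone().requires_grad_(True)
    model(x)[:, 0].sum().backward()
    assert x.grad.shape == inputs.shape
```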
Other methods of interest:

- Taylor Decomposition
- Layer-wise Relevance Propagation