Explainable AI aims to render model behavior understandable by humans, which can be seen as an intermediate step in extracting causal relations from correlative patterns. Due to the high risk of possibly fatal decisions in image-based clinical diagnostics, it is necessary to integrate explainable AI into these safety-critical systems. Current explanatory methods typically assign attribution scores to pixel regions in the input image, indicating their importance for a model's decision. However, they fall short when explaining why a visual feature is used. We propose a framework that utilizes interpretable disentangled representations for downstream-task prediction. By visualizing the disentangled representations, we enable experts to investigate possible causation effects by leveraging their domain knowledge. Additionally, we deploy multi-path attribution mappings to enrich and validate the explanations. We demonstrate the effectiveness of our approach on a synthetic benchmark suite and two medical datasets. We show that the framework not only acts as a catalyst for causal relation extraction but also enhances model robustness by enabling shortcut detection without the need for testing under distribution shifts.
├── README.md
├── LICENSE
├── requirements.txt - Requirements file for the environment
├── run_eval.py      - Main script to execute for evaluation
├── run_head.py      - Main script to execute for supervised training
├── run_tcvae.py     - Main script to execute for unsupervised pre-training
├── configs          - Hydra configs
│   ├── config_eval.yaml
│   ├── config_head.yaml
│   ├── config_tcvae.yaml
│   ├── callbacks
│   ├── datamodule
│   ├── evaluation
│   ├── experiment
│   ├── hydra
│   ├── logger
│   ├── model
│   └── trainer
├── data             - Data storage folders (each filled after first run)
│   ├── DiagVibSix
│   ├── ISIC
│   ├── MNIST
│   ├── models       - Trained and saved models
│   │   ├── dataset_beta - Copied checkpoints per dataset and beta value
│   │   └── images       - Image export folder
│   └── OCT
├── logs             - Logs and checkpoints saved per run and date
│   └── runs
│       └── date
│           └── timestamp
│               ├── checkpoints
│               ├── .hydra
│               └── tensorboard
└── src
    ├── evaluate.py  - Evaluation pipeline
    ├── train.py     - Training pipeline
    ├── datamodules  - Datamodule scripts
    ├── evaluation   - Evaluation scripts
    ├── models       - Lightning modules
    └── utils        - Various utility scripts (beta-TCVAE loss etc.)
All essential libraries for the execution of the code are provided in the requirements.txt file, from which a new environment can be created (Linux only). For the R script, please install the corresponding libraries beforehand. Set up the package in a conda environment:
git clone https://github.com/IML-DKFZ/m-pax_lib
cd m-pax_lib
conda create -n m-pax_lib python=3.7
source activate m-pax_lib
pip install -r requirements.txt
Depending on your GPU, change the torch and torchvision versions in the requirements.txt file to the respective CUDA-supporting versions. For CPU-only support, add trainer.gpus=0 behind every command.
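As an illustration, a hypothetical setup for CUDA 11.3 and the corresponding CPU-only invocation could look as follows (the pinned versions are examples, not a tested configuration; match them to your CUDA installation):

# sketch: install a CUDA 11.3 build of torch/torchvision before the remaining requirements
pip install torch==1.10.1+cu113 torchvision==0.11.2+cu113 -f https://download.pytorch.org/whl/torch_stable.html

# CPU-only variant of a run
python run_tcvae.py trainer.gpus=0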
Once the virtual environment is activated, the code can be run as follows:
Running the scripts without any experiment files will start the training and evaluation on MNIST. All parameters are defined in the Hydra config files and are not overwritten by any experiment files. The following commands will first train the β-TCVAE-loss-based model with β = 4, then train the downstream classification head, and finally evaluate the model. The run_tcvae.py script also automatically initializes the download and extraction of the dataset at ./data/MNIST.
python run_tcvae.py
python run_head.py
python run_eval.py
Before training the head, place one of the encoder checkpoints (best or last epoch) from ./logs/runs/date/timestamp/checkpoints at ./models/mnist_beta=4 and rename it to encoder.ckpt. The folder can be renamed, but the new name then has to be changed in the configs/model/head_model.yaml and configs/evaluation/default.yaml files. Place the head checkpoint in the same folder and rename it to head.ckpt. The evaluation script will automatically create an images folder inside and export all graphics to this location.
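A sketch of this step, assuming the encoder and head were trained in two separate runs (the date/timestamp folders and the checkpoint name last.ckpt are placeholders; substitute the values of your actual runs):

# placeholders: replace with the date and timestamp folders of your runs
ENCODER_RUN=./logs/runs/2022-05-01/12-00-00
HEAD_RUN=./logs/runs/2022-05-01/14-00-00

# copy the encoder checkpoint under the expected name
mkdir -p ./models/mnist_beta=4
cp "$ENCODER_RUN/checkpoints/last.ckpt" ./models/mnist_beta=4/encoder.ckpt

# after training the head, copy its checkpoint as well
cp "$HEAD_RUN/checkpoints/last.ckpt" ./models/mnist_beta=4/head.ckpt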
For all other experiments in the paper, respective experiment files that overwrite the default parameters were created. The following configurations reproduce the results from the paper for each dataset. You can also add your own experiment yaml files or change the existing ones, as sketched below. For more information see here.
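A minimal sketch of a custom experiment file, following the Lightning-Hydra-Template convention this repository builds on (the file name my_experiment.yaml and the override keys are hypothetical; they must match the keys actually defined in the config groups):

# write a hypothetical experiment file that overrides a few defaults
cat > configs/experiment/my_experiment.yaml <<'EOF'
# @package _global_
# hypothetical overrides; keys must exist in the respective configs
trainer:
  max_epochs: 50
model:
  beta: 8
EOF

# run pre-training with the custom experiment
python run_tcvae.py +experiment=my_experiment.yaml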
The ISIC and OCT evaluations need a rather large amount of RAM (~80 GB). Reduce the batch size in the isic_eval.yaml or oct_eval.yaml file to get less accurate but more RAM-sparing results.
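Alternatively, the batch size can be lowered directly from the command line via a standard Hydra override instead of editing the file (datamodule.batch_size is an assumed key name; check the respective datamodule config for the actual one):

python run_eval.py +experiment=isic_eval.yaml datamodule.batch_size=32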
python run_tcvae.py +experiment=diagvibsix_tcvae.yaml
python run_head.py +experiment=diagvibsix_head.yaml
python run_eval.py +experiment=diagvibsix_eval.yaml seed=43
These commands run the experiment for the ZGO study. For the other two studies, change ZGO to FGO_05 or FGO_20 in the three experiment files.
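A quick way to switch studies, assuming the study name appears as a plain string in the three diagvibsix experiment files (a convenience sketch, not part of the repository's tooling):

# replace the study identifier in all three experiment files in-place
sed -i 's/ZGO/FGO_05/g' \
    configs/experiment/diagvibsix_tcvae.yaml \
    configs/experiment/diagvibsix_head.yaml \
    configs/experiment/diagvibsix_eval.yaml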
python run_tcvae.py +experiment=oct_tcvae.yaml
python run_head.py +experiment=oct_head.yaml
python run_eval.py +experiment=oct_eval.yaml seed=48
python run_tcvae.py +experiment=isic_tcvae.yaml
python run_head.py +experiment=isic_head.yaml
python run_eval.py +experiment=isic_eval.yaml seed=47
GIFs traversing the ten latent space features for five observations from each of the three datasets:
Please cite the original publication:
@inproceedings{
klein2022improving,
title={Improving Explainability of Disentangled Representations using Multipath-Attribution Mappings},
author={Lukas Klein and Jo{\~a}o B. S. Carvalho and Mennatallah El-Assady and Paolo Penna and Joachim M. Buhmann and Paul F Jaeger},
booktitle={Medical Imaging with Deep Learning},
year={2022},
url={https://openreview.net/forum?id=3uQ2Z0MhnoE}
}
The code was developed by the authors of the paper. However, it also contains pieces of code from the following packages:
- Lightning-Hydra-Template by Zalewski, Łukasz et al.: https://github.com/ashleve/lightning-hydra-template
- Disentangled VAE by Dubois, Yann et al.: https://github.com/YannDubs/disentangling-vae
The m-pax_lib is developed and maintained by the Interactive Machine Learning Group of Helmholtz Imaging and the DKFZ, as well as the Information Science and Engineering Group at ETH Zürich.