Pinned Repositories
CDAM
Official implementation of CDAM
concept-saliency-maps
Contains the Jupyter notebooks to reproduce the results of the paper "Concept Saliency Maps to Visualize Relevant Features in Deep Generative Models" https://arxiv.org/pdf/1910.13140.pdf
Conditional_Diffusion_LIDC
Minimal script for a conditional diffusion model that generates LIDC images. Based on 'Classifier-Free Diffusion Guidance'.
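The core of classifier-free guidance, which this repository builds on, is a weighted combination of conditional and unconditional noise predictions at each sampling step. A minimal sketch of that combination (with random arrays standing in for a real denoiser's outputs; `cfg_noise_estimate` is a hypothetical helper name, not from the repository):

```python
import numpy as np

def cfg_noise_estimate(eps_cond, eps_uncond, w):
    """Combine conditional and unconditional noise predictions with
    guidance weight w, as in Ho & Salimans (2022):
    eps = (1 + w) * eps_cond - w * eps_uncond."""
    return (1.0 + w) * eps_cond - w * eps_uncond

# Toy example: random "predictions" stand in for a U-Net's outputs.
rng = np.random.default_rng(0)
eps_c = rng.normal(size=(4, 4))
eps_u = rng.normal(size=(4, 4))

guided = cfg_noise_estimate(eps_c, eps_u, w=0.5)
```

Setting `w = 0` recovers the purely conditional prediction; larger `w` pushes samples further toward the conditioning signal at the cost of diversity.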
ConRad
Code to reproduce the results of our ConRad paper
DeepExplain
A unified framework of perturbation- and gradient-based attribution methods for Deep Neural Network interpretability. DeepExplain also includes support for Shapley Value sampling. (ICLR 2018)
dino
PyTorch code for training Vision Transformers with the self-supervised learning method DINO
Feature-Perturbation-Augmentation
This repository contains the code to reproduce the results of our paper "Feature Perturbation Augmentation" (FPA)
gitignore
A collection of useful .gitignore templates
NoBias-Rectified-Gradient
We introduce a modification of Rectified Gradient. This repository is forked from https://github.com/1202kbs/Rectified-Gradient
saliency
Framework-agnostic implementation for state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more).
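One of the methods listed here, SmoothGrad, averages input gradients over noisy copies of the input to denoise saliency maps. A framework-agnostic sketch under stated assumptions (`grad_fn` is a hypothetical callable returning the model's input gradient; not the repository's actual API):

```python
import numpy as np

def smoothgrad(grad_fn, x, n_samples=16, noise_level=0.1, seed=0):
    """SmoothGrad (Smilkov et al., 2017): average input gradients over
    noisy copies of x. The noise std is noise_level * (max - min) of x."""
    rng = np.random.default_rng(seed)
    sigma = noise_level * (x.max() - x.min())
    grads = [grad_fn(x + rng.normal(scale=sigma, size=x.shape))
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)

# Toy model f(x) = sum(x**2), whose input gradient is 2*x.
grad_fn = lambda x: 2.0 * x
x = np.linspace(-1.0, 1.0, 8)
saliency_map = smoothgrad(grad_fn, x)
```

Because the injected noise is zero-mean, the averaged gradient converges toward the true gradient for a linear `grad_fn`, while for real networks it suppresses the high-frequency fluctuations that make raw gradient maps noisy.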
lenbrocki's Repositories
lenbrocki/concept-saliency-maps
Contains the Jupyter notebooks to reproduce the results of the paper "Concept Saliency Maps to Visualize Relevant Features in Deep Generative Models" https://arxiv.org/pdf/1910.13140.pdf
lenbrocki/ConRad
Code to reproduce the results of our ConRad paper
lenbrocki/CDAM
Official implementation of CDAM
lenbrocki/NoBias-Rectified-Gradient
We introduce a modification of Rectified Gradient. This repository is forked from https://github.com/1202kbs/Rectified-Gradient
lenbrocki/Feature-Perturbation-Augmentation
This repository contains the code to reproduce the results of our paper "Feature Perturbation Augmentation" (FPA)
lenbrocki/Conditional_Diffusion_LIDC
Minimal script for a conditional diffusion model that generates LIDC images. Based on 'Classifier-Free Diffusion Guidance'.
lenbrocki/DeepExplain
A unified framework of perturbation- and gradient-based attribution methods for Deep Neural Network interpretability. DeepExplain also includes support for Shapley Value sampling. (ICLR 2018)
lenbrocki/dino
PyTorch code for training Vision Transformers with the self-supervised learning method DINO
lenbrocki/gitignore
A collection of useful .gitignore templates
lenbrocki/saliency
Framework-agnostic implementation for state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more).
lenbrocki/SliceViewer
Simple Jupyter widget for viewing slices of 3D images
lenbrocki/Transformer-MM-Explainability
[ICCV 2021 - Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.