
FlashTorch


A Python visualization toolkit for neural networks, built with PyTorch.

Neural networks are often described as "black boxes". The lack of understanding of how neural networks make predictions enables unpredictable or biased models, causing real harm to society and a loss of trust in AI-assisted systems.

Feature visualization is an area of research that aims to understand how neural networks perceive images. However, implementing such techniques is often complicated.

FlashTorch was created to solve this problem!

You can apply feature visualization techniques (such as saliency maps and activation maximization) on your model, with as little as a few lines of code.

It is compatible with the pre-trained models that come with torchvision, and it seamlessly integrates with custom models built in PyTorch.

Interested?

Take a look at the quick 3min intro/demo to FlashTorch below!

FlashTorch demo

Want to try?

Head over to the example notebooks on Colab!

  • Saliency map demo

  • Activation maximization demo

Have questions? Join the chat at https://gitter.im/flashtorch/community


Installation

If you are installing FlashTorch for the first time:

$ pip install flashtorch

Or if you are upgrading it:

$ pip install flashtorch -U

API guide

The following modules are currently available:

  • flashtorch.utils: some useful utility functions for data handling & transformation
  • flashtorch.utils.imagenet: ImageNetIndex class for easy-ish retrieval of class index
  • flashtorch.saliency.backprop: Backprop class for calculating gradients
  • flashtorch.activmax.gradient_ascent: GradientAscent class for activation maximization

You can inspect each module with Python's built-in help function. For convenience, its output is also available in the Quick API Guide.
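For example, assuming FlashTorch is installed, you can inspect the saliency module from an interactive session like so:

```python
import flashtorch.saliency.backprop

# Print the module documentation, including the Backprop class docstring.
help(flashtorch.saliency.backprop)
```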

Quickstart

Below, you can find simple demos to get you started, as well as links to some handy notebooks showing additional examples of using FlashTorch.

Image handling (flashtorch.utils)
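Here is a minimal sketch of the typical image-handling flow, assuming the load_image and apply_transforms helpers listed in the API guide above (the image path is a placeholder):

```python
import matplotlib.pyplot as plt

from flashtorch.utils import apply_transforms, load_image

# load_image is assumed to return a PIL image in RGB mode.
image = load_image('/path/to/great_grey_owl.jpg')  # placeholder path

# apply_transforms is assumed to resize, normalize and batch the image
# into a tensor of shape (1, 3, 224, 224), ready for an ImageNet model.
input_ = apply_transforms(image)

# Display the original image.
plt.imshow(image)
plt.show()
```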

Saliency maps (flashtorch.saliency)

Saliency in human visual perception is a subjective quality that makes certain things within the field of view stand out from the rest and grab our attention.

Saliency maps in computer vision indicate the most salient regions within images. By creating a saliency map for a neural network, we can gain some intuition on "where the network is paying attention" in an input image.

Using the flashtorch.saliency module, let's visualize image-specific class saliency maps of AlexNet pre-trained on the ImageNet classification task.
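A minimal sketch of what this can look like, assuming the Backprop class and the utility helpers listed in the API guide above (the image path is a placeholder, and exact signatures may differ):

```python
from torchvision import models

from flashtorch.saliency import Backprop
from flashtorch.utils import apply_transforms, load_image
from flashtorch.utils.imagenet import ImageNetIndex

# Wrap a pre-trained AlexNet with the Backprop class.
model = models.alexnet(pretrained=True)
backprop = Backprop(model)

# Load and preprocess the input image (placeholder path).
owl = apply_transforms(load_image('/path/to/great_grey_owl.jpg'))

# Look up the ImageNet class index of the target class.
target_class = ImageNetIndex()['great grey owl']

# Calculate gradients of the target class w.r.t. the input image and
# render the (guided) saliency map alongside the input.
backprop.visualize(owl, target_class, guided=True)
```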

Saliency map of a great grey owl in AlexNet

The network is focusing on the sunken eyes and the round head of this owl.

Activation maximization (flashtorch.activmax)

Activation maximization is a form of feature visualization that lets us see what CNN filters are "looking for". Starting from an input image, we repeatedly update the image so as to maximize the activation of the filter of interest; in other words, we treat it as a gradient ascent task, with the filter's activation value as the loss.
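FlashTorch implements this loop for you, but to make the mechanics concrete, here is a rough sketch of activation maximization in plain PyTorch (the layer, filter index, and hyperparameters are arbitrary choices for illustration):

```python
import torch
from torchvision import models

# Pre-trained VGG16; we maximize the response of one filter in its
# first convolutional layer.
model = models.vgg16(pretrained=True).eval()
target_layer = model.features[0]

# Capture the layer's activations with a forward hook.
activations = {}
target_layer.register_forward_hook(
    lambda module, inputs, output: activations.update(output=output))

# Start from random noise and perform gradient ascent on the input image.
filter_idx = 3  # arbitrary filter, for illustration only
image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.1)

for _ in range(30):
    optimizer.zero_grad()
    model(image)
    # Maximize the filter's mean activation by minimizing its negation.
    loss = -activations['output'][0, filter_idx].mean()
    loss.backward()
    optimizer.step()
```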

Using the flashtorch.activmax module, let's visualize images optimized with filters from VGG16 pre-trained on the ImageNet classification task.
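A sketch of the corresponding FlashTorch call, assuming the GradientAscent class from the API guide above (the visualize signature is an assumption; the filter choices match the figure below):

```python
from torchvision import models

from flashtorch.activmax import GradientAscent

# Wrap the convolutional part of a pre-trained VGG16.
model = models.vgg16(pretrained=True)
g_ascent = GradientAscent(model.features)

# conv5_1 is the module at index 24 of VGG16's feature extractor.
conv5_1 = model.features[24]
conv5_1_filters = [45, 271]  # filters discussed below

# Optimize noise inputs to maximize each filter's activation and plot
# the resulting images.
g_ascent.visualize(conv5_1, conv5_1_filters, title='VGG16 conv5_1')
```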

VGG16 conv5_1 filters

Concepts such as 'eyes' (filter 45) and 'entrances (?)' (filter 271) seem to appear in the conv5_1 layer of VGG16.

Visit the notebook above to see what earlier layers do!

How to contribute

Thanks for your interest in contributing!

Please first head over to the Code of Conduct, which helps set the ground rules for participation in communities and helps build a culture of respect.

Still here? Great! There are many ways to contribute to this project. Get started here.

Resources

Talks & blog posts

Reading

Inspiration

Citation

Misa Ogura (2019, September 26).
MisaOgura/flashtorch: 0.1.1 (Version v0.1.1).
Zenodo. http://doi.org/10.5281/zenodo.3461737

Author

Misa Ogura

Medium | Twitter | LinkedIn

R&D Software Engineer @ BBC