VIsulizationCNN

This repo gives an illustration of the features extracted by a CNN. You can choose the input yourself, whether it is an image or something else, as long as it has three channels. This repo is originally based on https://github.com/utkuozbulak/pytorch-cnn-visualizations; the difference is the input data. Here the input is drawn from the IEMOCAP dataset, a multimodal dataset for emotion recognition, and the audio modality is used as input. Since this repo is just a demonstration, the code selects only one input at a time, manually.
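
Below is a minimal sketch of one way to turn an IEMOCAP audio clip into a three-channel input tensor using torchaudio. The file path is a hypothetical placeholder, and the repo's own preprocessing may differ; this only illustrates the "three-channel spectrogram" requirement mentioned above.

```python
import torch
import torchaudio

# Hypothetical path to an IEMOCAP utterance; replace with a real .wav file.
wav_path = "IEMOCAP/Session1/sentences/wav/example_utterance.wav"

waveform, sample_rate = torchaudio.load(wav_path)
waveform = waveform.mean(dim=0, keepdim=True)            # force mono: (1, time)

# Compute a log-mel spectrogram from the waveform.
mel = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate, n_mels=128)(waveform)
log_mel = torch.log(mel + 1e-6)                           # shape: (1, n_mels, time)

# Image-pretrained CNNs expect 3 channels, so replicate the single channel.
input_tensor = log_mel.repeat(3, 1, 1).unsqueeze(0)       # shape: (1, 3, n_mels, time)
```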

The original version of the code shows feature maps of images; here the input is a spectrogram. An example result is the layer visualization image layer_vis_l17_f5_iter95.
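
As a rough illustration of what such a layer/filter visualization involves, the sketch below captures the feature maps of one convolutional layer with a forward hook and plots a single filter's activation over the spectrogram. It assumes a torchvision VGG16 backbone and reuses `input_tensor` from the preprocessing sketch above; the layer index (17) and filter index (5) simply mirror the result file name, and the repo's actual visualization routine may work differently (e.g. gradient-based layer visualization, as in the upstream project).

```python
import torch
import matplotlib.pyplot as plt
from torchvision import models

# Assumed backbone: a pretrained VGG16 feature extractor.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

captured = {}

def hook(module, inputs, output):
    # Store the feature maps produced by the hooked layer.
    captured["fmap"] = output.detach()

layer_index = 17
handle = model[layer_index].register_forward_hook(hook)

with torch.no_grad():
    model(input_tensor)          # input_tensor from the preprocessing sketch above
handle.remove()

# Visualize filter 5 of layer 17 over the spectrogram and save it to disk.
fmap = captured["fmap"][0, 5]
plt.imshow(fmap.numpy(), aspect="auto", origin="lower")
plt.savefig("layer_vis_l17_f5.png")
```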