Pytorch_DMSCN

PyTorch implementation of Deep multimodal subspace clustering networks


Deep multimodal subspace clustering networks


Overview

This repository contains a PyTorch implementation of the paper "Deep Multimodal Subspace Clustering Networks" by Mahdi Abavisani and Vishal M. Patel, published in the IEEE Journal of Selected Topics in Signal Processing (JSTSP) in December 2018.

"Deep multimodal subspace clustering networks" (DMSC) investigated various fusion methods for the task of multimodal subspace clustering, and suggested a new fusion technique called "affinity fusion" as the idea of integrating complementary information from two modalities with respect to the similarities between datapoints across different modalities.


For more details, please refer to the original repository: https://github.com/mahdiabavisani/Deep-multimodal-subspace-clustering-networks.

Citation

Please use the following to refer to this work in publications:


@ARTICLE{8488484,
  author={M. {Abavisani} and V. M. {Patel}},
  journal={IEEE Journal of Selected Topics in Signal Processing},
  title={Deep Multimodal Subspace Clustering Networks},
  year={2018},
  volume={12},
  number={6},
  pages={1601-1614},
  doi={10.1109/JSTSP.2018.2875385},
  ISSN={1932-4553},
  month={Dec},
}

Setup:

Dependencies:

PyTorch, numpy, scikit-learn (sklearn), munkres, scipy.
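A quick way to verify that the dependencies are importable (a minimal sketch; no particular versions are implied):

import torch, numpy, scipy, sklearn, munkres

print(torch.__version__, numpy.__version__, scipy.__version__, sklearn.__version__)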

Data preprocessing:

Resize the input images of all modalities to 32 × 32 and rescale them so that pixel values lie between 0 and 255. This keeps the hyperparameter selections suggested in "Deep Subspace Clustering Networks" valid.
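As a rough example, the resizing and rescaling could be done with Pillow (PIL) and numpy (a minimal sketch; Pillow is not in the dependency list above, and the function name and interpolation choice are assumptions):

import numpy as np
from PIL import Image

def preprocess_image(path):
    # Resize to 32 x 32 and rescale pixel values to the [0, 255] range.
    img = Image.open(path).convert('L')          # grayscale; use 'RGB' for color modalities
    img = img.resize((32, 32), Image.BILINEAR)
    arr = np.asarray(img, dtype=np.float64)
    arr = (arr - arr.min()) / max(arr.max() - arr.min(), 1.0) * 255.0
    return arr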

Save the data in a .mat file that contains the vectorized modalities as separate matrices named modality_0, modality_1, ...; the labels in a vector named Labels; and the number of modalities in the variable num_modalities.
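A minimal sketch of building such a file with scipy.io.savemat, using randomly generated stand-in data and a hypothetical output path:

import numpy as np
from scipy.io import savemat

# Stand-in data: replace with your preprocessed 32x32 images and labels.
num_samples = 100
modality_0 = np.random.rand(num_samples, 32, 32) * 255.0   # e.g. visible images
modality_1 = np.random.rand(num_samples, 32, 32) * 255.0   # e.g. a second modality
labels = np.random.randint(0, 10, size=num_samples)        # cluster labels

savemat('Data/my_dataset.mat', {
    'modality_0': modality_0.reshape(num_samples, -1),      # vectorized modalities
    'modality_1': modality_1.reshape(num_samples, -1),
    'Labels': labels.reshape(-1, 1),
    'num_modalities': 2,
})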

A sample preprocessed dataset is available in: Data/EYB_fc.mat
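To inspect the sample file (a minimal sketch; the exact array shapes depend on the dataset):

from scipy.io import loadmat

data = loadmat('Data/EYB_fc.mat')
num_modalities = int(data['num_modalities'])
print('Labels:', data['Labels'].shape)
for i in range(num_modalities):
    print(f'modality_{i}:', data[f'modality_{i}'].shape)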