In this project, VQ-VAE (Vector Quantized VAE) is leveraged to learn the latent representation z of various medical image datasets x from MedMNIST. Similar to VAE (Variational Autoencoder), VQ-VAE consists of an encoder q(z|x) and a decoder p(x|z). But unlike VAE, which generally relies on the Gaussian reparameterization trick, VQ-VAE uses vector quantization to sample the latent representation z ~ q(z|x). Vector quantization lets VQ-VAE replace each latent variable produced by the encoder with a learned embedding from a codebook C ∈ ℝ^(E×D), where E is the number of embeddings and D is the number of latent variable dimensions (or channels, in the context of image data).

Let X ∈ ℝ^(H×W×D) be the output feature map of the encoder, where H is the height and W is the width. To discretize the raw latent variables, we first compute the squared Euclidean distance between each latent vector in X and every embedding in C; this determines which embedding is closest to each latent vector. Expanding the squared distance, the computation is ‖x‖² + ‖c‖² − 2·xᵀc for each latent vector x and embedding c. Taking the index of the minimum distance over the embedding axis yields Z ∈ {1, …, E}^(H×W), where each element denotes the index of the embedding nearest to the corresponding latent vector. Indexing C with Z then gives the final discrete representation. Inspired by the centroid update of K-means clustering, EMA (exponential moving average) is applied during training to update the codebook online, tracking both the embeddings and the estimated number of latent vectors assigned to each one.
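The following is a minimal PyTorch sketch of this quantization step and the EMA codebook update. It assumes a channels-last encoder output of shape (B, H, W, D); the class and parameter names (`VectorQuantizerEMA`, `decay`, `eps`) are illustrative, not the exact implementation used in this project.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VectorQuantizerEMA(nn.Module):
    """Sketch of vector quantization with EMA codebook updates."""

    def __init__(self, num_embeddings: int, embedding_dim: int,
                 decay: float = 0.99, eps: float = 1e-5):
        super().__init__()
        self.decay = decay
        self.eps = eps
        # Codebook C ∈ ℝ^(E×D)
        embed = torch.randn(num_embeddings, embedding_dim)
        self.register_buffer("embed", embed)
        # EMA accumulators: per-code member counts and summed member vectors
        self.register_buffer("cluster_size", torch.zeros(num_embeddings))
        self.register_buffer("embed_sum", embed.clone())

    def forward(self, x: torch.Tensor):
        # x: encoder output of shape (B, H, W, D); flatten to (B*H*W, D)
        flat = x.reshape(-1, x.shape[-1])

        # Squared Euclidean distance: ‖x‖² + ‖c‖² − 2·x·cᵀ
        dist = (flat.pow(2).sum(1, keepdim=True)
                + self.embed.pow(2).sum(1)
                - 2 * flat @ self.embed.t())

        # Z: index of the nearest embedding for each latent vector
        indices = dist.argmin(dim=1)

        # Index the codebook with Z to get the discrete representation
        quantized = self.embed[indices].reshape(x.shape)

        if self.training:
            one_hot = F.one_hot(indices, self.embed.shape[0]).type(flat.dtype)
            # K-means-style online centroid update via EMA: track the
            # estimated cluster sizes and the summed member vectors
            self.cluster_size.mul_(self.decay).add_(
                one_hot.sum(0), alpha=1 - self.decay)
            self.embed_sum.mul_(self.decay).add_(
                one_hot.t() @ flat, alpha=1 - self.decay)
            # Laplace smoothing avoids division by zero for unused codes
            n = self.cluster_size.sum()
            smoothed = ((self.cluster_size + self.eps)
                        / (n + self.embed.shape[0] * self.eps) * n)
            self.embed.copy_(self.embed_sum / smoothed.unsqueeze(1))

        # Straight-through estimator: copy gradients from quantized to x
        quantized = x + (quantized - x).detach()
        return quantized, indices.reshape(x.shape[:-1])
```

Because the argmin is non-differentiable, the straight-through estimator passes gradients from the quantized output directly back to the encoder; with EMA updates, the codebook itself requires no gradient-based loss term.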
To discern the latent space, see the visualization below.
Loss of the model during training.
MAE on the training and validation sets.
PSNR on the training and validation sets.
SSIM on the training and validation sets.
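For reference, the reconstruction metrics reported above (MAE, PSNR, SSIM) can be computed as in the following sketch using NumPy and scikit-image; the function name `reconstruction_metrics` and the [0, 1] pixel range are assumptions, not necessarily how this project evaluates them.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def reconstruction_metrics(original: np.ndarray, reconstructed: np.ndarray):
    """Compute MAE, PSNR, and SSIM for a pair of images scaled to [0, 1]."""
    mae = np.abs(original - reconstructed).mean()
    psnr = peak_signal_noise_ratio(original, reconstructed, data_range=1.0)
    # channel_axis=-1 assumes channels-last RGB images (e.g., DermaMNIST);
    # drop it for grayscale datasets such as PneumoniaMNIST or BreastMNIST
    ssim = structural_similarity(original, reconstructed, data_range=1.0,
                                 channel_axis=-1)
    return mae, psnr, ssim
```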
Here is the visualization of the latent space:
The latent space of five distinct datasets, i.e., DermaMNIST, PneumoniaMNIST, RetinaMNIST, BreastMNIST, and BloodMNIST.