LFM

Official PyTorch implementation of the paper: Flow Matching in Latent Space


Table of contents
  1. Installation
  2. Dataset preparation
  3. Training
  4. Testing
  5. Acknowledgments
  6. Contacts


Quan Dao · Hao Phung · Binh Nguyen · Anh Tran

VinAI Research

  [Page]    [Paper]   


Abstract: Flow matching is a recent framework to train generative models that exhibits impressive empirical performance while being relatively easier to train compared with diffusion-based models. Despite its advantageous properties, prior methods still face the challenges of expensive computing and a large number of function evaluations of off-the-shelf solvers in the pixel space. Furthermore, although latent-based generative methods have shown great success in recent years, this particular model type remains underexplored in this area. In this work, we propose to apply flow matching in the latent spaces of pretrained autoencoders, which offers improved computational efficiency and scalability for high-resolution image synthesis. This enables flow-matching training on constrained computational resources while maintaining their quality and flexibility. Additionally, our work stands as a pioneering contribution in the integration of various conditions into flow matching for conditional generation tasks, including label-conditioned image generation, image inpainting, and semantic-to-image generation. Through extensive experiments, our approach demonstrates its effectiveness in both quantitative and qualitative results on various datasets, such as CelebA-HQ, FFHQ, LSUN Church & Bedroom, and ImageNet. We also provide a theoretical control of the Wasserstein-2 distance between the reconstructed latent flow distribution and true data distribution, showing it is upper-bounded by the latent flow matching objective.
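The training objective described in the abstract can be sketched in a few lines. Below is a minimal NumPy illustration of the flow matching loss with a linear interpolation path between an autoencoder latent and Gaussian noise; the velocity "network" here is a trivial stand-in, not the paper's model, and the shapes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_loss(velocity_net, z1, z0, t):
    """Conditional flow matching loss on latents: interpolate
    z_t = (1 - t) * z0 + t * z1 and regress the network output
    toward the constant target velocity z1 - z0."""
    zt = (1.0 - t) * z0 + t * z1
    target = z1 - z0
    pred = velocity_net(zt, t)
    return np.mean((pred - target) ** 2)

# Stand-in "network": predicts zero velocity everywhere.
zero_net = lambda z, t: np.zeros_like(z)

z1 = rng.standard_normal((4, 8))   # autoencoder latents (hypothetical shape)
z0 = rng.standard_normal((4, 8))   # Gaussian noise
t = rng.uniform(size=(4, 1))       # one time value per sample
loss = flow_matching_loss(zero_net, z1, z0, t)
```

With the zero-velocity stand-in, the loss reduces to the mean squared norm of the target velocity, which makes the regression target easy to see.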

Details of the model architectures and experimental results can be found in the following paper:

@article{dao2023lfm,
    author    = {Quan Dao and Hao Phung and Binh Nguyen and Anh Tran},
    title     = {Flow Matching in Latent Space},
    journal   = {arXiv preprint arXiv:2307.08698},
    year      = {2023}
}

Please CITE our paper whenever this repository is used to help produce published results or incorporated into other software.

Installation

Python 3.10 and PyTorch 1.13.1/2.0.0 are used in this implementation. Please install the required libraries:

pip install -r requirements.txt

Dataset preparation

For CelebA HQ 256, FFHQ 256, and LSUN, please follow NVAE's instructions.

For higher-resolution datasets (CelebA HQ 512 & 1024), please refer to WaveDiff's documentation.

For the ImageNet dataset, please download it directly from the official website.

Training

All training scripts are wrapped in run.sh. Simply comment/uncomment the relevant commands and run bash run.sh.

Testing

Sampling

Run run_test.sh / run_test_cls.sh with the corresponding arguments file.

bash run_test.sh <path_to_arg_file>

Only one GPU is required.

These arguments are specified as follows:
MODEL_TYPE=DiT-L/2
EPOCH_ID=475
DATASET=celeba_256
EXP=celeb_f8_dit
METHOD=dopri5
STEPS=0
USE_ORIGIN_ADM=False
IMG_SIZE=256

Arguments files and checkpoints are provided below:

Exp              Args                        FID   Checkpoints
celeb_f8_dit     test_args/celeb256_dit.txt  5.26  model_475.pth
ffhq_f8_dit      test_args/ffhq_dit.txt      4.55  model_475.pth
bed_f8_dit       test_args/bed_dit.txt       4.92  model_550.pth
church_f8_dit    test_args/church_dit.txt    5.54  model_575.pth
imnet_f8_ditb2   test_args/imnet_dit.txt     4.46  model_875.pth
celeb512_f8_adm  test_args/celeb512_adm.txt  6.35  model_575.pth
celeba_f8_adm    test_args/celeb256_adm.txt  5.82  ---
ffhq_f8_adm      test_args/ffhq_adm.txt      5.82  ---
bed_f8_adm       test_args/bed_adm.txt       7.05  ---
church_f8_adm    test_args/church_adm.txt    7.70  ---
imnet_f8_adm     test_args/imnet_adm.txt     8.58  ---

Please put the downloaded pre-trained models in the saved_info/latent_flow/<DATASET>/<EXP> directory, where <DATASET> is defined as in bash_scripts/run.sh.

Utilities

To measure sampling time, please add --measure_time to the script.

To compute the number of function evaluations of the adaptive solver (default: dopri5), please add --compute_nfe to the script.
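NFE counting amounts to counting how many times the solver queries the velocity field. A generic illustration of the idea, not the repository's --compute_nfe code, is to wrap the field in a counting callable:

```python
class CountedField:
    """Wrap a velocity field and count how often the solver evaluates it.
    A generic sketch of NFE counting, not the repository's implementation."""
    def __init__(self, fn):
        self.fn = fn
        self.nfe = 0

    def __call__(self, x, t):
        self.nfe += 1
        return self.fn(x, t)

field = CountedField(lambda x, t: -x)

# A fixed-step Euler loop makes exactly one evaluation per step,
# so 10 steps should report an NFE of 10.
x, h = 1.0, 0.1
for i in range(10):
    x = x + h * field(x, i * h)
```

An adaptive solver like dopri5 would instead report a data-dependent count, which is exactly what makes the NFE metric interesting.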

To use fixed-step solvers (e.g. euler and heun), please add --use_karras_samplers and change two arguments as follows:

METHOD=heun
STEPS=50
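A fixed-step sampler simply integrates the learned velocity field over t ∈ [0, 1] with a constant step size. Below is a minimal sketch of a second-order Heun integrator of the kind selected above; it assumes a generic velocity function and is not the repository's sampler implementation:

```python
import math

def heun_sample(velocity, x0, steps):
    """Fixed-step Heun integration of dx/dt = velocity(x, t) from t=0 to t=1.
    Each step uses a predictor evaluation and a corrector evaluation."""
    x, t = x0, 0.0
    h = 1.0 / steps
    for _ in range(steps):
        k1 = velocity(x, t)              # slope at the current point
        k2 = velocity(x + h * k1, t + h) # slope at the Euler predictor
        x = x + 0.5 * h * (k1 + k2)      # average the two slopes
        t += h
    return x

# Sanity check on a known ODE: dx/dt = -x, so x(1) = x(0) * exp(-1).
x1 = heun_sample(lambda x, t: -x, 1.0, 50)
```

Heun uses two function evaluations per step, so STEPS=50 costs 100 NFE; this is the usual accuracy/cost trade-off against single-evaluation Euler steps.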

Evaluation

To evaluate FID scores, please download the pre-computed stats from here and put them in pytorch_fid.

Then run bash run_test_ddp.sh for unconditional generation or bash run_test_cls_ddp.sh for conditional generation. By default, multi-GPU sampling with 8 GPUs is used for faster evaluation.

Computing stats for a new dataset

pytorch_fid/compute_dataset_stat.py is provided for this purpose.

python pytorch_fid/compute_dataset_stat.py \
  --dataset <dataset> --datadir <path_to_data> \
  --image_size <image_size> --save_path <path_to_save>
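The stats file consumed by FID evaluation stores the mean and covariance of Inception features over the dataset. Below is a minimal sketch of what such a computation conceptually produces, assuming the features have already been extracted (the actual script above also handles image loading and the Inception network; the `stats.npz` name is a hypothetical example):

```python
import numpy as np

def compute_stats(features):
    """FID reference statistics from an (N, D) feature array:
    the per-dimension mean and the D x D covariance matrix."""
    mu = features.mean(axis=0)
    sigma = np.cov(features, rowvar=False)
    return mu, sigma

features = np.array([[0.0, 0.0], [2.0, 2.0]])  # toy stand-in "features"
mu, sigma = compute_stats(features)
# np.savez("stats.npz", mu=mu, sigma=sigma)  # roughly what --save_path stores
```

FID then compares these (mu, sigma) pairs between real and generated data via the Fréchet distance between the two Gaussians.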

Acknowledgments

Our code is adapted from several sources: EDM, DiT, ADM, CD, Flow Matching in 100 LOC by François Rozet, and WaveDiff. We greatly appreciate these publicly available resources for research and development.

Contacts

If you have any problems, please open an issue in this repository or send an email to v.quandm7@vinai.io / tienhaophung@gmail.com.