Official implementation of the ECAI 2024 paper:
GRIF-DM: Generation of Rich Impression Fonts using Diffusion Models
GRIF-DM is a diffusion model designed to generate rich impression fonts. This repository provides the official PyTorch implementation of our ECAI 2024 paper. Our model leverages the capabilities of diffusion models to produce high-quality, diverse font styles, pushing the boundaries of generative typography.
We utilize the MyFonts dataset to train our proposed diffusion model from scratch. The dataset contains glyph images of 18,815 fonts, located in the `fontimage/` folder upon downloading.
- Data Filtering: Fonts with a width-to-height ratio greater than 2:1 have been filtered out to ensure quality (see the sketch after this list).
- Data Organization: The processed data is organized into well-structured training and test sets available in the `train_test_sets/` folder.
- Configuration: Please modify the folder paths in `dataset.py` according to your system setup.
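For readers who want to reproduce the aspect-ratio filter on their own copy of the data, here is a minimal sketch. It assumes one PNG per glyph stored under `fontimage/<font_name>/`; the folder layout and file naming are assumptions for illustration, and the released `train_test_sets/` splits already reflect this filtering.

```python
# Hypothetical sketch of the 2:1 width-to-height filtering described above.
# Assumes one PNG per glyph under fontimage/<font_name>/; adjust to your layout.
from pathlib import Path
from PIL import Image

def keep_font(glyph_dir: Path, max_ratio: float = 2.0) -> bool:
    """Return True if every glyph image has width/height <= max_ratio."""
    for img_path in glyph_dir.glob("*.png"):
        with Image.open(img_path) as img:
            w, h = img.size
            if w / h > max_ratio:
                return False
    return True

if __name__ == "__main__":
    root = Path("fontimage")  # downloaded MyFonts glyph images
    kept = [d.name for d in sorted(root.iterdir()) if d.is_dir() and keep_font(d)]
    print(f"{len(kept)} fonts pass the 2:1 width-to-height filter")
```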
For more details on the dataset and its preprocessing, please refer to the following paper:
Tianlang Chen, Zhaowen Wang, Ning Xu, Hailin Jin, and Jiebo Luo. "Large-scale Tag-based Font Retrieval with Generative Feature Learning", IEEE International Conference on Computer Vision (ICCV), 2019.
The training and evaluation processes are handled by `train.py`.
To start training the model, run:

```bash
python train.py
```
- Checkpoints: Model weights are saved in the `weights/` folder every 10 epochs (a minimal sketch of this schedule follows the list).
- Monitoring: Intermediate results are saved in the `outputs/` folder after each epoch for easy monitoring and visualization.
- Configuration: Adjust hyperparameters and configurations in `train.py` as needed.
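For orientation, the snippet below is a minimal sketch of the checkpointing and monitoring schedule described above; the model, data, and loss are toy placeholders rather than the actual components defined in `train.py`.

```python
# Toy sketch of the save schedule: outputs/ every epoch, weights/ every 10 epochs.
# The model, data, and loss below are placeholders, not the code in train.py.
import os
import torch
import torch.nn as nn

num_epochs = 30
model = nn.Linear(16, 16)  # stand-in for the diffusion model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

os.makedirs("weights", exist_ok=True)
os.makedirs("outputs", exist_ok=True)

for epoch in range(1, num_epochs + 1):
    # --- one training pass (placeholder loss on random data) ---
    x = torch.randn(8, 16)
    loss = nn.functional.mse_loss(model(x), x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Intermediate results after every epoch for monitoring/visualization.
    torch.save({"epoch": epoch, "loss": loss.item()}, f"outputs/epoch_{epoch:03d}.pt")

    # Model weights every 10 epochs.
    if epoch % 10 == 0:
        torch.save(model.state_dict(), f"weights/epoch_{epoch:03d}.pth")
```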
Evaluation is integrated within `train.py`. After training, the script can generate visualizations of the results.
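As a rough illustration of how a saved checkpoint could be loaded for visualization, the sketch below uses placeholder names (`build_model`, `model.sample`) that are not the actual interfaces of this repository; see `train.py` for the real evaluation path.

```python
# Hypothetical post-training visualization sketch. The model class and its
# sampling interface are placeholders; consult train.py for the actual code.
import torch
from torchvision.utils import save_image

weights_path = "weights/epoch_030.pth"  # pick any saved checkpoint
state_dict = torch.load(weights_path, map_location="cpu")

# model = build_model()                 # construct the network as in train.py
# model.load_state_dict(state_dict)
# model.eval()
# with torch.no_grad():
#     samples = model.sample(letter="A", impressions=["elegant", "handwritten"])
# save_image(samples, "outputs/eval_grid.png", nrow=8)
```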
Here are some sample fonts generated by GRIF-DM:
If you find our work helpful for your research or use it as a baseline model, please cite our paper as follows:
@article{kang2024grif,
title={GRIF-DM: Generation of Rich Impression Fonts using Diffusion Models},
author={Kang, Lei and Yang, Fei and Wang, Kai and Souibgui, Mohamed Ali and Gomez, Lluis and Forn{\'e}s, Alicia and Valveny, Ernest and Karatzas, Dimosthenis},
journal={arXiv preprint arXiv:2408.07259},
year={2024}
}