Question about Korean generator
work82mj opened this issue · 2 comments
In the paper, there are no details about the Korean generator: the number of sample fonts, and the number of seen/unseen characters.
I'm just curious about the training details for Korean.
The full details are exactly the same as DM-Font.
The Korean-handwriting and Thai-printing datasets were built from the UhBee fonts and a Thai font collection, respectively. To ensure the style diversity of the dataset, one font was selected from each font family in our experiments.
We did not publish the full font files used for training and evaluation in DM-Font, because font assets are usually not license-free. You can find the details in the DM-Font repository and paper.
We also report more details about the Korean experiments in our PAMI extension:
https://arxiv.org/abs/2112.11895
If you are curious about the training iterations, please check Section B.2 of the original paper:
"In the first phase, we train the model without factorization modules during 800k iterations for Chinese and 200k iterations for Korean."
Closing the issue, assuming the answer resolves the problem.
Please re-open the issue if necessary.