jychoi118/P2-weighting

ffhq train

Opened this issue · 1 comment

I would like to know what --use_fp16 setting you used when training on FFHQ. I would be grateful if you could share your full FFHQ training settings, in more detail than the paper, in a form similar to the command below. I also wonder whether this setup is consistent with your training:

python scripts/image_train.py --data_dir ./data --attention_resolutions 16 --class_cond False --diffusion_steps 1000 --dropout 0.0 --image_size 256 --learn_sigma True --noise_schedule linear --num_channels 128 --num_head_channels 64 --num_res_blocks 1 --resblock_updown True --use_fp16 False --use_scale_shift_norm True --lr 2e-5 --batch_size 8 --rescale_learned_sigmas True --p2_gamma 0.5 --p2_k 1 --log_dir logs 

Unfortunately, we did not use fp16 for training. Our full setting is noted in the README:
python scripts/image_train.py --data_dir data/DATASET_NAME --attention_resolutions 16 --class_cond False --diffusion_steps 1000 --dropout 0.0 --image_size 256 --learn_sigma True --noise_schedule linear --num_channels 128 --num_head_channels 64 --num_res_blocks 1 --resblock_updown True --use_fp16 False --use_scale_shift_norm True --lr 2e-5 --batch_size 8 --rescale_learned_sigmas True --p2_gamma 1 --p2_k 1 --log_dir logs
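The --p2_gamma and --p2_k flags in the command above control the P2 loss weight, which down-weights timesteps with high signal-to-noise ratio. A minimal sketch of how that weight behaves over the 1000-step linear schedule from the command (this is an illustrative reimplementation, not the repository's code; the weight form 1 / (k + SNR(t))^gamma follows the P2-weighting paper):

```python
import numpy as np

# Linear beta schedule, matching --noise_schedule linear --diffusion_steps 1000
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_cumprod = np.cumprod(1.0 - betas)

# Signal-to-noise ratio at each timestep t
snr = alphas_cumprod / (1.0 - alphas_cumprod)

# P2 weight: 1 / (k + SNR(t))^gamma, here with the README's gamma=1, k=1
p2_gamma, p2_k = 1.0, 1.0
p2_weight = 1.0 / (p2_k + snr) ** p2_gamma

# Early (low-noise) steps get a tiny weight; late (high-noise) steps approach 1
print(p2_weight[0], p2_weight[-1])
```

With gamma=1 and k=1 the weight is near zero at t=0 (where SNR is large, so the model would otherwise overfit imperceptible details) and rises toward 1 at t=T-1; setting --p2_gamma 0.5, as in the question's command, makes this down-weighting milder.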