openai/consistency_models

I use the pre-trained model to generate images, but the quality is very poor and I don't know what's going on.

stonecropa opened this issue · 9 comments

The command is:

```
python image_sample.py --batch_size 8 --training_mode consistency_distillation --sampler multistep --ts 0,67,150 --steps 151 --model_path E:\Googledownload\ct_bedroom256.pt --attention_resolutions 32,16,8 --class_cond False --use_scale_shift_norm False --dropout 0.0 --image_size 256 --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --num_samples 500 --resblock_updown True --use_fp16 True --weight_schedule uniform
```
The resulting samples are shown in the attached image.

I don't know why this happens. Is there a good way to fix it? Thanks.

I got the same bad results as you. May I ask whether you changed this setting: attention_type="default" in unet.py?

> I got the same bad results as you. May I ask whether you changed this setting: attention_type="default" in unet.py?

I got significantly better image quality after I used attention_type="flash"

> I got the same bad results as you. May I ask whether you changed this setting: attention_type="default" in unet.py?
>
> I got significantly better image quality after I used attention_type="flash"

Thanks. However, I was unable to use the attention_type="flash" option on a V100. I'm looking for other ways.

@SherlockJane I have successfully generated good images with it. Install flash-attn==1.0.2.
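For anyone following the same step, here is a quick sanity check to confirm the package is importable before re-running sampling (a minimal sketch; the helper name and the printed messages are mine, not from the repo):

```python
def check_flash_attn():
    """Report whether the flash-attn package can be imported.

    A quick check before setting attention_type="flash"; assumes
    flash-attn==1.0.2 was installed with pip as suggested above.
    """
    try:
        import flash_attn  # noqa: F401
        print("flash-attn imported OK; attention_type='flash' should be usable")
    except ImportError:
        print("flash-attn not installed; attention_type='flash' will fail")

check_flash_attn()
```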

> @SherlockJane I have successfully generated good images with it. Install flash-attn==1.0.2.
I use flash-attn==1.0.2 and attention_type="default", and my GPU is a V100, but I still do not get good images. What is your GPU?

You need to set attention_type="flash".

@treefreq My GPU is a V100, and I cannot use the attention_type="flash" option.
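For context on the V100 failures in this thread: flash-attn 1.x targets Turing and Ampere GPUs, i.e. CUDA compute capability 7.5 or higher, while the V100 is Volta (7.0), which would explain why the flash option cannot be used there. A minimal sketch of that check (the helper name and threshold encode my reading of the flash-attn requirements; on a real machine you can obtain the capability tuple from torch.cuda.get_device_capability()):

```python
def flash_attn_supported(capability):
    """Return True if a (major, minor) CUDA compute capability meets the
    Turing/Ampere requirement assumed for flash-attn 1.x (sm_75+)."""
    return capability >= (7, 5)

# V100 is Volta (sm_70): not supported, matching the reports in this thread.
print(flash_attn_supported((7, 0)))  # False
# A100 is Ampere (sm_80): supported.
print(flash_attn_supported((8, 0)))  # True
```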

Added PR #37 to get good quality without using flash attention.