Issues with test_seesr_turbo with solutions
mashoutsider opened this issue · 2 comments
Great work on this - lots of improvements - this is a great, memory-efficient approach.
When I navigate to the sd-turbo model using the provided link, there is no feature_extractor, so I used the SD 2.1 feature extractor.
I think the line below is a bug? It seems to be looking for the unet in two places. I assume the correct path is args.seesr_model_path?
unet = UNet2DConditionModel.from_pretrained_orig(args.pretrained_model_path, args.seesr_model_path, subfolder="unet", use_image_cross_attention=True)
Hello, your handling of the feature_extractor is necessary, but kindly note that SeeSR is based on sd2-base, not sd2.1. This has already been corrected in the README.
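A minimal sketch of one way to pull just the feature_extractor config from sd2-base into a local model folder (the target directory below is an assumption; adjust it to your own layout):

```python
# Sketch: download only the feature_extractor files of sd2-base into a local
# model folder. The local_dir path is an example, not the repo's fixed layout.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="stabilityai/stable-diffusion-2-base",   # SeeSR is based on sd2-base
    allow_patterns=["feature_extractor/*"],          # only the feature_extractor config
    local_dir="preset/models/sd-turbo",              # assumed location of your sd-turbo copy
)
```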
Please put the pretrained sd-turbo model into args.pretrained_model_path and the SeeSR model into args.seesr_model_path. The former provides the pretrained unet, and the latter provides the soft-prompt cross-attention modules.
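For clarity, here is that loading call annotated with what each path contributes (a sketch only; the import path and folder names are assumptions, not taken verbatim from test_seesr_turbo.py):

```python
# Sketch of the intended two-path loading; the import location and the model
# folder names are assumptions based on the discussion above.
from models.unet_2d_condition import UNet2DConditionModel  # SeeSR's customized UNet class

pretrained_model_path = "preset/models/sd-turbo"  # pretrained sd-turbo model
seesr_model_path = "preset/models/seesr"          # SeeSR checkpoint

unet = UNet2DConditionModel.from_pretrained_orig(
    pretrained_model_path,   # provides the pretrained unet weights
    seesr_model_path,        # provides the soft-prompt cross-attention modules
    subfolder="unet",
    use_image_cross_attention=True,
)
```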
Delete args.pretrained_model_path in unet = UNet2DConditionModel.from_pretrained_orig(args.pretrained_model_path, args.seesr_model_path, subfolder="unet", use_image_cross_attention=True).
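Applied literally, the edited line would look like the sketch below; this assumes from_pretrained_orig also works when given only the SeeSR path, which the thread does not confirm:

```python
# Sketch of the call after removing args.pretrained_model_path, per the note
# above; assumes from_pretrained_orig can load everything from the SeeSR path alone.
unet = UNet2DConditionModel.from_pretrained_orig(
    args.seesr_model_path,
    subfolder="unet",
    use_image_cross_attention=True,
)
```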