AttributeError: 'bool' object has no attribute '__module__'
fuaolshi opened this issue · 4 comments
I'm a beginner and hit this problem while running the code, but I don't know where the issue might be.
Traceback (most recent call last):
  File "run_gaussian_shading.py", line 148, in <module>
    main(args)
  File "run_gaussian_shading.py", line 21, in main
    pipe = InversableStableDiffusionPipeline.from_pretrained(
  File "/home/ly/.local/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/mnt/ly/miniconda3/envs/fumin/lib/python3.8/site-packages/diffusers/pipelines/pipeline_utils.py", line 910, in from_pretrained
    model = pipeline_class(**init_kwargs)
  File "/mnt/ly/models/diffusion/shiwei/Gaussian-Shading_sw/inverse_stable_diffusion.py", line 48, in __init__
    super(InversableStableDiffusionPipeline, self).__init__(vae,
  File "/mnt/ly/models/diffusion/shiwei/Gaussian-Shading_sw/modified_stable_diffusion.py", line 35, in __init__
    super(ModifiedStableDiffusionPipeline, self).__init__(vae,
  File "/mnt/ly/miniconda3/envs/fumin/lib/python3.8/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py", line 241, in __init__
    self.register_modules(
  File "/mnt/ly/miniconda3/envs/fumin/lib/python3.8/site-packages/diffusers/pipelines/pipeline_utils.py", line 154, in register_modules
    library, class_name = _fetch_class_library_tuple(module)
  File "/mnt/ly/miniconda3/envs/fumin/lib/python3.8/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 488, in _fetch_class_library_tuple
    library = not_compiled_module.__module__.split(".")[0]
AttributeError: 'bool' object has no attribute '__module__'
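For context, the failure mode reproduces with plain Python: instances of built-in types such as `bool` do not expose `__module__` (it lives on the type object, not the instance), so any code that treats a bool argument as a module-bearing object, as `_fetch_class_library_tuple` does here, raises exactly this error. A minimal stdlib-only sketch:

```python
# Minimal reproduction: looking up __module__ on a bool instance fails,
# because __module__ is an attribute of the type object (bool), not of
# instances of built-in types like True/False.
value = True  # e.g. a bool like requires_safety_checker passed where a module is expected
try:
    library = value.__module__.split(".")[0]
except AttributeError as exc:
    print(exc)  # 'bool' object has no attribute '__module__'

# The type itself does carry the attribute:
print(bool.__module__)  # builtins
```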
Hi! @fuaolshi
You may be using the wrong version of the library. Please check that the versions you are using match the ones pinned in requirements.txt.
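To check which version is actually installed, a small stdlib-only snippet (it only reports the installed version; compare it manually against requirements.txt):

```python
from importlib import metadata

# Report the installed diffusers version, if any, so it can be compared
# against the version pinned in requirements.txt.
try:
    print("diffusers", metadata.version("diffusers"))
except metadata.PackageNotFoundError:
    print("diffusers is not installed")
```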
Has this been resolved?
I also encountered the same problem
It should run after the following two adjustments.
First, you could reduce the class body in modified_stable_diffusion.py to a plain 'pass', as:
class ModifiedStableDiffusionPipeline(StableDiffusionPipeline):
pass
Second, also for the constructor, in inverse_stable_diffusion.py, adjust as:
class InversableStableDiffusionPipeline(ModifiedStableDiffusionPipeline):
    def __init__(
        self,
        vae: AutoencoderKL,
        text_encoder: CLIPTextModel,
        tokenizer: CLIPTokenizer,
        unet: UNet2DConditionModel,
        scheduler: KarrasDiffusionSchedulers,
        safety_checker: StableDiffusionSafetyChecker,
        feature_extractor: CLIPImageProcessor,
        image_encoder: CLIPVisionModelWithProjection = None,
        requires_safety_checker: bool = True,
    ):
        super().__init__(vae, text_encoder, tokenizer, unet, scheduler, safety_checker, feature_extractor, image_encoder, requires_safety_checker)
        self.forward_diffusion = partial(self.backward_diffusion, reverse_process=True)
        self.count = 0
These two steps should solve the problem. However, I also ran into some dtype issues in pipe.unet and the VAE decoder with bf16: I noticed that the latent_model_input variable in __call__() in ModifiedStableDiffusionPipeline is half (float16), not bf16, and before the decode step we should also cast latents to dtype=torch.bfloat16. Not sure if I was the only one who hit this.
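The dtype fix described above can be sketched as follows (a hedged, illustrative snippet, not the pipeline's actual code; the tensor shape and variable names are placeholders):

```python
import torch

# Illustrative sketch: before calling the VAE decoder, cast the latents to
# the dtype the VAE weights use (here assumed to be bfloat16), since they
# may arrive as float16 from earlier pipeline steps.
vae_dtype = torch.bfloat16  # assumed dtype of the VAE in a bf16 pipeline
latents = torch.randn(1, 4, 64, 64, dtype=torch.float16)  # half, as observed

if latents.dtype != vae_dtype:
    latents = latents.to(dtype=vae_dtype)

print(latents.dtype)  # torch.bfloat16
```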