LAION-AI/ldm-finetune

Running autoedit.py results in error

Opened this issue · 6 comments

python autoedit.py --edit test.png --model_path inpaint.pt

Traceback (most recent call last):
  File "C:\Users\paperspace\anaconda3\envs\ldm\ldm-finetune\autoedit.py", line 498, in <module>
    args = parse_args()
  File "C:\Users\paperspace\anaconda3\envs\ldm\ldm-finetune\autoedit.py", line 471, in parse_args
    parser.add_argument(
  File "C:\Users\paperspace\anaconda3\envs\ldm\lib\argparse.py", line 1422, in add_argument
    action = action_class(**kwargs)
TypeError: _StoreTrueAction.__init__() got an unexpected keyword argument 'type'

Fixed the issue by deleting line 474 of autoedit.py and changing the line after it from

`action="store_true"`

to

`default="True"`
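For reference, this is the pattern that trips argparse: a `store_true` action can't take a `type=` kwarg, which is exactly what the `TypeError` above complains about. A minimal repro/fix sketch (the flag name is a placeholder, not necessarily the one on line 474):

```python
import argparse

parser = argparse.ArgumentParser()

# Broken: store_true actions accept no `type` kwarg, so argparse raises
#   TypeError: _StoreTrueAction.__init__() got an unexpected keyword argument 'type'
# parser.add_argument("--some-flag", action="store_true", type=bool)

# Working: drop `type=` and keep the boolean flag behaviour.
parser.add_argument("--some-flag", action="store_true")

args = parser.parse_args(["--some-flag"])
print(args.some_flag)  # True
```

Simply removing the `type=` argument (rather than swapping to `default="True"`) should keep the flag behaving as a real boolean.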

I'll see if I can make a PR when I get home later today.

EDIT:
Never mind, it's still broken. I'll have to come back to it later.

New error after fixing the lines above:

  File "C:\Users\paperspace\anaconda3\envs\ldm\ldm-finetune\autoedit.py", line 500, in <module>
    main(args)
  File "C:\Users\paperspace\anaconda3\envs\ldm\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\paperspace\anaconda3\envs\ldm\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\paperspace\anaconda3\envs\ldm\lib\site-packages\torch\autocast_mode.py", line 12, in decorate_autocast
    return func(*args, **kwargs)
  File "C:\Users\paperspace\anaconda3\envs\ldm\ldm-finetune\autoedit.py", line 295, in main
    image_embed = predict_util.prepare_edit(
  File "C:\Users\paperspace\anaconda3\envs\ldm\ldm-finetune\predict_util.py", line 151, in prepare_edit
    if edit.endswith(".npy"):
AttributeError: 'WindowsPath' object has no attribute 'endswith'
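This second error looks like prepare_edit assumes `--edit` is a plain string, while argparse is handing it a pathlib.Path (hence `WindowsPath` on my machine). A sketch of the kind of normalization that would avoid it; the helper name and loading code are illustrative, not the repo's actual implementation:

```python
from pathlib import Path
from typing import Union

import numpy as np
from PIL import Image


def load_edit_target(edit: Union[str, Path]) -> np.ndarray:
    """Hypothetical helper: normalize the --edit argument before branching on its extension.

    When argparse supplies a WindowsPath/PosixPath, str methods like .endswith() don't exist;
    converting to Path and comparing .suffix (or casting to str first) avoids the AttributeError.
    """
    edit = Path(edit)
    if edit.suffix == ".npy":
        return np.load(edit)
    # Otherwise treat it as an ordinary image file (PNG, JPEG, ...).
    return np.asarray(Image.open(edit).convert("RGB"))
```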

I can try spinning up my Linux VM and see if that has the same issue.

Okay, I rebuilt the repo with the autoedit branch, and after trial and error with different Python versions, 3.8 seems to work. However, I ran out of GPU and didn't have time to tweak the prompt to see if I could make it run with the limited 16 GB I was working with.

Either way, I'm closing this.

This is for the AutoEdit branch.

Autoedit itself runs fine without an input image; however, if I give it a 256x256 input image in either PNG or NPY format, it spits out the following error:

Traceback (most recent call last):
  File "autoedit.py", line 422, in <module>
    main(args)
  File "/usr/local/lib/python3.7/dist-packages/torch/autocast_mode.py", line 12, in decorate_autocast
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "autoedit.py", line 182, in main
    device=device,
  File "/content/ldm-finetune/guided_diffusion/predict_util.py", line 183, in load_diffusion_model
    model, diffusion = create_model_and_diffusion(**model_config)
  File "/content/ldm-finetune/guided_diffusion/script_util.py", line 137, in create_model_and_diffusion
    timestep_respacing=timestep_respacing,
  File "/content/ldm-finetune/guided_diffusion/script_util.py", line 450, in create_gaussian_diffusion
    rescale_timesteps=rescale_timesteps,
  File "/content/ldm-finetune/guided_diffusion/respace.py", line 86, in __init__
    super().__init__(**kwargs)
  File "/content/ldm-finetune/guided_diffusion/gaussian_diffusion.py", line 148, in __init__
    assert self.alphas_cumprod_prev.shape == (self.num_timesteps,)
AssertionError

I have tried using various models and settings with no success.
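For what it's worth, the failing assertion in gaussian_diffusion.py just checks that the schedule arrays derived from the betas line up with the expected step count. A simplified sketch of that invariant (paraphrased, not copied from the repo), which suggests the betas built from the config don't match the step count the respacing expects:

```python
import numpy as np

# Simplified paraphrase of the check that fires in gaussian_diffusion.py
# (the real __init__ does much more; only the failing invariant is shown).
def check_beta_schedule(betas: np.ndarray, num_timesteps: int) -> None:
    betas = np.asarray(betas, dtype=np.float64)
    alphas_cumprod = np.cumprod(1.0 - betas, axis=0)
    alphas_cumprod_prev = np.append(1.0, alphas_cumprod[:-1])
    # Holds only when betas is 1-D and its length equals num_timesteps,
    # i.e. when the respaced schedule agrees with the step count the model expects.
    assert alphas_cumprod_prev.shape == (num_timesteps,)


check_beta_schedule(np.linspace(1e-4, 2e-2, 1000), 1000)   # OK
# check_beta_schedule(np.linspace(1e-4, 2e-2, 1000), 250)  # AssertionError, like the one above
```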

Running the script by itself works fine, but with the `--edit` setting and a PNG file I get the following error:

  File "autoedit.py", line 416, in <module>
    main(args)
  File "/usr/local/lib/python3.7/dist-packages/torch/autocast_mode.py", line 12, in decorate_autocast
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "autoedit.py", line 182, in main
    device=device,
  File "/content/drive/MyDrive/autoedit/ldm-finetune/guided_diffusion/predict_util.py", line 183, in load_diffusion_model
    model, diffusion = create_model_and_diffusion(**model_config)
  File "/content/drive/MyDrive/autoedit/ldm-finetune/guided_diffusion/script_util.py", line 137, in create_model_and_diffusion
    timestep_respacing=timestep_respacing,
  File "/content/drive/MyDrive/autoedit/ldm-finetune/guided_diffusion/script_util.py", line 450, in create_gaussian_diffusion
    rescale_timesteps=rescale_timesteps,
  File "/content/drive/MyDrive/autoedit/ldm-finetune/guided_diffusion/respace.py", line 86, in __init__
    super().__init__(**kwargs)
  File "/content/drive/MyDrive/autoedit/ldm-finetune/guided_diffusion/gaussian_diffusion.py", line 148, in __init__
    assert self.alphas_cumprod_prev.shape == (self.num_timesteps,)
AssertionError

@KnoBuddy Thanks so much for documenting all this. Sorry, indeed I've been making a lot of fixes over the past week. The original guided-diffusion codebase from OpenAI that this is based on is incredibly esoteric and brittle, so fixing one bug can sometimes introduce others.

I'll look into this soon.