SpenserCai/sd-webui-deoldify

Experimenting with included demo.jpeg

lavalava45 opened this issue · 7 comments

Starting job extras
*** Error completing request
*** Arguments: (0, <PIL.Image.Image image mode=RGB size=793x468 at 0x190C2038730>, None, '', '', True, 0, 1, 512, 512, True, 'None', 'None', 0, 0, 0, 0, True, 35, False) {}
Traceback (most recent call last):
File "D:\stable-diffusion-webui\modules\call_queue.py", line 58, in f
res = list(func(*args, **kwargs))
File "D:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "D:\stable-diffusion-webui\modules\postprocessing.py", line 62, in run_postprocessing
scripts.scripts_postproc.run(pp, args)
File "D:\stable-diffusion-webui\modules\scripts_postprocessing.py", line 130, in run
script.process(pp, **process_args)
File "D:\stable-diffusion-webui\extensions\sd-webui-deoldify\scripts\postprocessing_deoldify.py", line 63, in process
pp.image = self.process_image(pp.image, render_factor, artistic)
File "D:\stable-diffusion-webui\extensions\sd-webui-deoldify\scripts\postprocessing_deoldify.py", line 55, in process_image
vis = get_image_colorizer(root_folder=Path(paths_internal.models_path),render_factor=render_factor, artistic=artistic)
File "D:\stable-diffusion-webui\extensions\sd-webui-deoldify\deoldify\visualize.py", line 417, in get_image_colorizer
return get_stable_image_colorizer(root_folder=root_folder, render_factor=render_factor)
File "D:\stable-diffusion-webui\extensions\sd-webui-deoldify\deoldify\visualize.py", line 426, in get_stable_image_colorizer
learn = gen_inference_wide(root_folder=root_folder, weights_name=weights_name)
File "D:\stable-diffusion-webui\extensions\sd-webui-deoldify\deoldify\generators.py", line 19, in gen_inference_wide
learn.load(weights_name)
File "D:\stable-diffusion-webui\extensions\sd-webui-deoldify\fastai\basic_train.py", line 271, in load
state = torch.load(source, map_location=device)
File "D:\stable-diffusion-webui\modules\safe.py", line 108, in load
return load_with_extra(filename, *args, extra_handler=global_extra_handler, **kwargs)
File "D:\stable-diffusion-webui\modules\safe.py", line 156, in load_with_extra
return unsafe_torch_load(filename, *args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\serialization.py", line 815, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\serialization.py", line 1051, in _legacy_load
typed_storage._untyped_storage._set_from_file(
RuntimeError: unexpected EOF, expected 7913014 more bytes. The file might be corrupted.


2023-08-07 18:32:29 INFO [httpx] HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-08-07 18:32:29 INFO [httpx] HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"
HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"

Please confirm whether --disable-safe-unpickle is included in the launch command. Also confirm that the model was downloaded completely.

You can download the model from here and then put it into stable-diffusion-webui\models\deoldify\
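
In case it helps to sanity-check the placement, here is a minimal sketch that lists whatever .pth files are present in that folder along with their sizes (the D:\ path and the deoldify folder name are assumptions based on this thread; an obviously small file would suggest an incomplete download):

```python
# Minimal sketch: list the .pth files under models\deoldify and their sizes.
# The webui install path and the "deoldify" folder name are assumptions from this thread.
from pathlib import Path

models_dir = Path(r"D:\stable-diffusion-webui\models\deoldify")

if not models_dir.is_dir():
    print(f"{models_dir} does not exist - place (or re-download) the models there")
else:
    for pth in sorted(models_dir.glob("*.pth")):
        size_mb = pth.stat().st_size / (1024 * 1024)
        print(f"{pth.name}: {size_mb:.1f} MB")
```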

Hello again! I had to redownload all the models to be sure that the problem is not with them,
and here is my COMMANDLINE_ARGS=--opt-sdp-attention --opt-split-attention --autolaunch --deepdanbooru --api --disable-safe-unpickle

I restarted the WebUI once more. I forgot to report that it works with the "artistic" option activated but still throws an error without it:

2023-08-07 22:42:09 INFO [modules.shared] Starting job extras
Starting job extras
*** Error completing request
*** Arguments: (0, <PIL.Image.Image image mode=RGB size=2400x1770 at 0x191478CF6D0>, None, '', '', True, 0, 1, 512, 512, True, 'None', 'None', 0, 0, 0, 0, True, 35, False) {}
Traceback (most recent call last):
File "D:\stable-diffusion-webui\modules\call_queue.py", line 58, in f
res = list(func(*args, **kwargs))
File "D:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "D:\stable-diffusion-webui\modules\postprocessing.py", line 62, in run_postprocessing
scripts.scripts_postproc.run(pp, args)
File "D:\stable-diffusion-webui\modules\scripts_postprocessing.py", line 130, in run
script.process(pp, **process_args)
File "D:\stable-diffusion-webui\extensions\sd-webui-deoldify\scripts\postprocessing_deoldify.py", line 63, in process
pp.image = self.process_image(pp.image, render_factor, artistic)
File "D:\stable-diffusion-webui\extensions\sd-webui-deoldify\scripts\postprocessing_deoldify.py", line 55, in process_image
vis = get_image_colorizer(root_folder=Path(paths_internal.models_path),render_factor=render_factor, artistic=artistic)
File "D:\stable-diffusion-webui\extensions\sd-webui-deoldify\deoldify\visualize.py", line 417, in get_image_colorizer
return get_stable_image_colorizer(root_folder=root_folder, render_factor=render_factor)
File "D:\stable-diffusion-webui\extensions\sd-webui-deoldify\deoldify\visualize.py", line 426, in get_stable_image_colorizer
learn = gen_inference_wide(root_folder=root_folder, weights_name=weights_name)
File "D:\stable-diffusion-webui\extensions\sd-webui-deoldify\deoldify\generators.py", line 19, in gen_inference_wide
learn.load(weights_name)
File "D:\stable-diffusion-webui\extensions\sd-webui-deoldify\fastai\basic_train.py", line 271, in load
state = torch.load(source, map_location=device)
File "D:\stable-diffusion-webui\modules\safe.py", line 108, in load
return load_with_extra(filename, *args, extra_handler=global_extra_handler, **kwargs)
File "D:\stable-diffusion-webui\modules\safe.py", line 156, in load_with_extra
return unsafe_torch_load(filename, *args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\serialization.py", line 815, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\serialization.py", line 1051, in _legacy_load
typed_storage._untyped_storage._set_from_file(
RuntimeError: unexpected EOF, expected 7913014 more bytes. The file might be corrupted.


2023-08-07 22:42:22 INFO [httpx] HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-08-07 22:42:22 INFO [httpx] HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"
HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"

From the error message, this is indeed an error caused by an incomplete model file. You can try comparing the SHA256 of the model. For example, ColorizeArtistic_gen.pth's SHA256 is: 3f750246fa220529323b85a8905f9b49c0e5d427099185334d048fb
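
For reference, a small sketch of how the local file's SHA256 can be computed for that comparison (the model path is an assumption based on the logs in this thread):

```python
# Compute the SHA256 of a local model file so it can be compared against the
# published hash. The path below is an assumption taken from this thread.
import hashlib
from pathlib import Path

model_path = Path(r"D:\stable-diffusion-webui\models\deoldify\ColorizeArtistic_gen.pth")

sha256 = hashlib.sha256()
with model_path.open("rb") as f:
    # Read in 1 MiB chunks to avoid loading the whole model into memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

print(f"{sha256.hexdigest()} *{model_path.name}")
```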

Hugging Face shows a different hash for this model:
SHA256: 3f750246fa220529323b85a8905f9b49c0e5d427099185334d048fb5b5e22477

The one that I have locally has the same hash (the other two models also have the same hashes as Hugging Face's):
3f750246fa220529323b85a8905f9b49c0e5d427099185334d048fb5b5e22477 *ColorizeArtistic_gen.pth

but it's different from the hash you've posted above (I think yours is truncated somehow).

Yes, it was truncated. That hash is correct. You can try testing with other black-and-white images.

Your issue looks like the same thing: you can try deleting the model directory and restarting so the model is downloaded again.
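
If it helps, a minimal sketch of that cleanup step (the directory path is an assumption from this thread, and re-downloading on restart is per the comment above; double-check the path before deleting anything):

```python
# Minimal sketch: remove the deoldify model directory so the models can be
# downloaded again on the next WebUI start. The path is an assumption from
# this thread - verify it before running.
import shutil
from pathlib import Path

models_dir = Path(r"D:\stable-diffusion-webui\models\deoldify")

if models_dir.is_dir():
    shutil.rmtree(models_dir)
    print(f"Removed {models_dir}; restart the WebUI so the models are downloaded again")
else:
    print(f"{models_dir} not found - nothing to remove")
```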