benrugg/AI-Render

An occasional crash that occurs when I render

Closed this issue · 1 comment

Describe the bug

Image is not produced.

To reproduce

Not sure how to reproduce, other than it seems to occur when the image similarity is 0.2 or below.

Error log

venv "C:\Users\Tom\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 5ab7f213bec2f816f9c5644becb32eb72c8ffb89
Installing requirements
Launching Web UI with arguments: --api --xformers
Loading weights [6ce0161689] from C:\Users\Tom\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: C:\Users\Tom\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(0):
Model loaded in 3.3s (load weights from disk: 0.2s, create model: 0.4s, apply weights to model: 0.6s, apply half(): 0.6s, move model to device: 0.5s, load textual inversion embeddings: 1.0s).
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Startup time: 9.3s (import torch: 1.5s, import gradio: 0.9s, import ldm: 0.5s, other imports: 0.8s, load scripts: 1.1s, load SD checkpoint: 3.6s, create ui: 0.6s, gradio launch: 0.1s, scripts app_started_callback: 0.1s).
0%| | 0/25 [00:19<?, ?it/s]
API error: POST: http://127.0.0.1:7860/sdapi/v1/img2img {'error': 'NansException', 'detail': '', 'body': '', 'errors': 'A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.'}
Traceback (most recent call last):
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 94, in receive
return self.receive_nowait()
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 89, in receive_nowait
raise WouldBlock
anyio.WouldBlock

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 78, in call_next
message = await recv_stream.receive()
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 114, in receive
raise EndOfStream
anyio.EndOfStream

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\Tom\stable-diffusion-webui\modules\api\api.py", line 144, in exception_handling
return await call_next(request)
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 84, in call_next
raise app_exc
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 70, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 108, in __call__
response = await self.dispatch_func(request, call_next)
File "C:\Users\Tom\stable-diffusion-webui\modules\api\api.py", line 109, in log_and_time
res: Response = await call_next(req)
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 84, in call_next
raise app_exc
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 70, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__
await responder(scope, receive, send)
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 44, in __call__
await self.app(scope, receive, self.send_with_gzip)
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
raise exc
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
raise e
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 276, in handle
await self.app(scope, receive, send)
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 66, in app
response = await func(request)
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 237, in app
raw_response = await run_endpoint_function(
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 165, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
return await anyio.to_thread.run_sync(func, *args)
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\Users\Tom\stable-diffusion-webui\modules\api\api.py", line 375, in img2imgapi
processed = process_images(p)
File "C:\Users\Tom\stable-diffusion-webui\modules\processing.py", line 515, in process_images
res = process_images_inner(p)
File "C:\Users\Tom\stable-diffusion-webui\modules\processing.py", line 669, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "C:\Users\Tom\stable-diffusion-webui\modules\processing.py", line 1115, in sample
samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
File "C:\Users\Tom\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 350, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "C:\Users\Tom\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 251, in launch_sampling
return func()
File "C:\Users\Tom\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 350, in <lambda>
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\Tom\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "C:\Users\Tom\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\Tom\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 167, in forward
devices.test_for_nans(x_out, "unet")
File "C:\Users\Tom\stable-diffusion-webui\modules\devices.py", line 156, in test_for_nans
raise NansException(message)
modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.

Environment

  • Blender version (upper right corner of splash screen): 3.3.1, branch: master, commit date: 2022-10-04 18:35, hash: b292cfe5a936, type: release (build date: 2022-10-05, 00:49:25)
  • AI Render version (find in Preferences > Add-ons): 0.7.8
  • Operating system (Windows/Mac/Linux): Windows ('Windows-10-10.0.22621-SP0')

System details:
  Device name: MSI
  Processor: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
  Installed RAM: 32.0 GB (31.8 GB usable)
  Device ID: AB93B167-C093-42FF-98F9-51140450A7AB
  Product ID: 00325-81428-42625-AAOEM
  System type: 64-bit operating system, x64-based processor
  Pen and touch: No pen or touch input is available for this display

Screenshots/video

No response

Additional information

No response

This is also something to submit to Automatic1111: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues
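In the meantime, a possible stopgap is to apply the flag the error message itself suggests. A sketch for `webui-user.bat` (the standard launcher file on a Windows webui install), keeping the `--api --xformers` arguments visible in the log above — note `--no-half` trades the crash for higher VRAM use and slower generation, and this is untested against this specific setup:

```shell
REM webui-user.bat — keep the existing flags from the log and add --no-half,
REM which runs the model in float32 and avoids the half-precision NaNs
set COMMANDLINE_ARGS=--api --xformers --no-half
```

Alternatively, enabling "Upcast cross attention layer to float32" in Settings > Stable Diffusion (also suggested by the error) may be enough without the full-float32 cost.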