The Kandinsky 2.2 model with the diffusers pipeline reports an error when generating a second image
klossm opened this issue · 1 comment
The Kandinsky 2.2 model with the diffusers pipeline generates the first image successfully, but it reports an error when generating a second image. Here are my settings:
Enable prior generation on CPU
Enable half precision weights
Enable sliced attention
Enable sequential CPU offload
Enable channels last memory format
The error also occurs when "Enable channels last memory format" is turned off.
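For reference, here is a minimal sketch of how I understand these UI flags map onto diffusers calls for the Kandinsky 2.2 prior/decoder pipelines. The exact wiring inside kubin (model_22_utils.py) is an assumption on my part; only the diffusers APIs themselves are real.

```python
# Sketch only: assumes kubin applies the UI flags roughly like this.
import torch
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline

# "Enable half precision weights"
dtype = torch.float16

prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=dtype
)
decoder = KandinskyV22Pipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=dtype
)

# "Enable prior generation on CPU"
prior.to("cpu")

# "Enable sliced attention"
decoder.enable_attention_slicing()

# "Enable sequential CPU offload" -- accelerate replaces the module
# parameters with meta tensors and streams real weights to the GPU
# layer by layer during the forward pass.
decoder.enable_sequential_cpu_offload()

# "Enable channels last memory format"
decoder.unet.to(memory_format=torch.channels_last)
```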
The traceback:
task queued: text2img
Traceback (most recent call last):
File "M:\ai\kubin\venv\lib\site-packages\gradio\routes.py", line 439, in run_predict
output = await app.get_blocks().process_api(
File "M:\ai\kubin\venv\lib\site-packages\gradio\blocks.py", line 1384, in process_api
result = await self.call_function(
File "M:\ai\kubin\venv\lib\site-packages\gradio\blocks.py", line 1089, in call_function
prediction = await anyio.to_thread.run_sync(
File "M:\ai\kubin\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "M:\ai\kubin\venv\lib\site-packages\anyio_backends_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "M:\ai\kubin\venv\lib\site-packages\anyio_backends_asyncio.py", line 807, in run
result = context.run(func, *args)
File "M:\ai\kubin\venv\lib\site-packages\gradio\utils.py", line 700, in wrapper
response = f(*args, **kwargs)
File "M:\ai\kubin\src\ui_blocks\t2i.py", line 240, in generate
return generate_fn(params)
File "M:\ai\kubin\src\webui.py", line 34, in
generate_fn=lambda params: kubin.model.t2i(params),
File "M:\ai\kubin\src\models\model_diffusers22\model_22.py", line 120, in t2i
prior, decoder = self.prepareModel("text2img")
File "M:\ai\kubin\src\models\model_diffusers22\model_22.py", line 76, in prepareModel
prior, decoder = prepare_weights_for_task(self, task)
File "M:\ai\kubin\src\models\model_diffusers22\model_22_utils.py", line 137, in prepare_weights_for_task
to_device(model.params, current_prior, current_decoder)
File "M:\ai\kubin\src\models\model_diffusers22\model_22_utils.py", line 252, in to_device
prior.to(prior_device)
File "M:\ai\kubin\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 682, in to
module.to(torch_device, torch_dtype)
File "M:\ai\kubin\venv\lib\site-packages\transformers\modeling_utils.py", line 1902, in to
return super().to(*args, **kwargs)
File "M:\ai\kubin\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
return self._apply(convert)
File "M:\ai\kubin\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "M:\ai\kubin\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "M:\ai\kubin\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "M:\ai\kubin\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
param_applied = fn(param)
File "M:\ai\kubin\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!
Thanks, I will investigate this. Perhaps "Run prior on CPU" conflicts with some of the other flags.
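For what it's worth, "Cannot copy out of meta tensor; no data!" is the usual symptom of calling .to() on a pipeline that already has sequential CPU offload enabled: accelerate swaps the real parameters for meta tensors, so a later device move has nothing to copy. Below is a minimal sketch of the suspected interaction, not the actual kubin code path (the real reuse happens inside prepare_weights_for_task / to_device):

```python
# Sketch of the suspected failure mode (assumption: the same pipeline
# object is reused between generations and moved with .to()).
import torch
from diffusers import KandinskyV22PriorPipeline

prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
)

# First generation: sequential offload replaces the real weights with
# meta tensors and only materializes them on the GPU during forward.
prior.enable_sequential_cpu_offload()

# Second generation: a plain device move now fails, because the
# parameters are meta tensors with no data behind them.
prior.to("cpu")  # NotImplementedError: Cannot copy out of meta tensor; no data!
```

If that is what happens here, skipping the .to() call (or reloading the pipeline) for pipelines that were set up with enable_sequential_cpu_offload() should avoid the crash.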