Error registering modules: C:\actions-runner\w\SRT\SRT\c\runtime\src\iree\hal\drivers\vulkan\native_executable.cc:51:
trifkovic opened this issue · 0 comments
OS: Win10
Vulkan: Runtime + SDK
Drivers: https://www.amd.com/en/support/kb/release-notes/rn-rad-win-22-11-1-mlir-iree
shark_tank local cache is located at C:\Users\x\.local/shark_tank/ . You may change this by setting the --local_tank_cache= flag
transformers\utils\generic.py:311: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
torch.utils._pytree._register_pytree_node(
transformers\utils\generic.py:311: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
torch.utils._pytree._register_pytree_node(
diffusers\utils\outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
Clearing .mlir temporary files from a prior run. This may take some time...
Clearing .mlir temporary files took 0.0156 seconds.
gradio temporary image cache located at C:\ai2\shark_tmp/gradio. You may change this by setting the GRADIO_TEMP_DIR environment variable.
Clearing gradio UI temporary image files from a prior run. This may take some time...
Clearing gradio UI temporary image files took 0.0000 seconds.
vulkan devices are available.
metal devices are not available.
cuda devices are not available.
rocm devices are available.
local-sync devices are available.
local-task devices are available.
ui\txt2img_ui.py:373: UserWarning: Settings.json file not found or 'txt2img' key is missing. Using default values for fields.
{'cpu': ['Intel(R) Core(TM) i5-4460 CPU @ 3.20GHz => cpu-task'], 'cuda': [], 'vulkan': ['Radeon RX 580 Series => vulkan://0', 'Radeon RX 580 Series => vulkan://1'], 'rocm': ['Radeon RX 580 Series => rocm://0', 'Radeon RX 580 Series => rocm://1']}
Running on local URL: http://0.0.0.0:8080
To create a public link, set `share=True` in `launch()`.
Found device Radeon RX 580 Series. Using target triple rdna3-unknown-windows.
Tuned models are currently not supported for this setting.
saving euler_scale_model_input_1_512_512_vulkan_fp16_torch_linalg.mlir to .\shark_tmp
loading existing vmfb from: C:\ai2\euler_scale_model_input_1_512_512_vulkan_fp16.vmfb
Loading module C:\ai2\euler_scale_model_input_1_512_512_vulkan_fp16.vmfb...
Compiling Vulkan shaders. This may take a few minutes.
saving euler_step_epsilon_1_512_512_vulkan_fp16_torch_linalg.mlir to .\shark_tmp
loading existing vmfb from: C:\ai2\euler_step_epsilon_1_512_512_vulkan_fp16.vmfb
Loading module C:\ai2\euler_step_epsilon_1_512_512_vulkan_fp16.vmfb...
Compiling Vulkan shaders. This may take a few minutes.
saving euler_a_scale_model_input_1_512_512_vulkan_fp16_torch_linalg.mlir to .\shark_tmp
loading existing vmfb from: C:\ai2\euler_a_scale_model_input_1_512_512_vulkan_fp16.vmfb
Loading module C:\ai2\euler_a_scale_model_input_1_512_512_vulkan_fp16.vmfb...
Compiling Vulkan shaders. This may take a few minutes.
saving euler_a_step_epsilon_1_512_512_vulkan_fp16_torch_linalg.mlir to .\shark_tmp
loading existing vmfb from: C:\ai2\euler_a_step_epsilon_1_512_512_vulkan_fp16.vmfb
Loading module C:\ai2\euler_a_step_epsilon_1_512_512_vulkan_fp16.vmfb...
Compiling Vulkan shaders. This may take a few minutes.
use_tuned? sharkify: False
Checkpoint already loaded at : C:/ai2/models/diffusers/epicphotogasm_amateurreallife
self.favored_base_models: ['stabilityai/stable-diffusion-2-1', 'CompVis/stable-diffusion-v1-4']
allowed_base_model_ids: ['stabilityai/stable-diffusion-2-1', 'CompVis/stable-diffusion-v1-4']
Loading module C:\ai2\unet_1_64_512_512_fp16_epicphotogasm_amateurreallife_vulkan.vmfb...
Compiling Vulkan shaders. This may take a few minutes.
Error registering modules: C:\actions-runner\w\SRT\SRT\c\runtime\src\iree\hal\drivers\vulkan\native_executable.cc:51: UNAVAILABLE; VK_ERROR_INITIALIZATION_FAILED; vkCreateShaderModule; while invoking native function hal.executable.create; while calling import;
[ 1] native hal.executable.create:0 -
[ 0] bytecode module@1:1324 -
Retrying with a different base model configuration, as stabilityai/stable-diffusion-2-1 did not work
Loading module C:\ai2\unet_1_64_512_512_fp16_epicphotogasm_amateurreallife_vulkan.vmfb...
Compiling Vulkan shaders. This may take a few minutes.
Error registering modules: C:\actions-runner\w\SRT\SRT\c\runtime\src\iree\hal\drivers\vulkan\native_executable.cc:51: UNAVAILABLE; VK_ERROR_INITIALIZATION_FAILED; vkCreateShaderModule; while invoking native function hal.executable.create; while calling import;
[ 1] native hal.executable.create:0 -
[ 0] bytecode module@1:1324 -
Retrying with a different base model configuration, as CompVis/stable-diffusion-v1-4 did not work
ERROR: Traceback (most recent call last):
File "asyncio\runners.py", line 190, in run
File "asyncio\runners.py", line 118, in run
File "asyncio\base_events.py", line 640, in run_until_complete
File "asyncio\windows_events.py", line 321, in run_forever
File "asyncio\base_events.py", line 607, in run_forever
File "asyncio\base_events.py", line 1922, in _run_once
File "asyncio\events.py", line 80, in _run
File "C:\Users\x\AppData\Local\Temp\_MEI108682\gradio\queueing.py", line 538, in process_events
response = await self.call_prediction(awake_events, batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\x\AppData\Local\Temp\_MEI108682\gradio\queueing.py", line 489, in call_prediction
output = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\x\AppData\Local\Temp\_MEI108682\gradio\route_utils.py", line 232, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\x\AppData\Local\Temp\_MEI108682\gradio\blocks.py", line 1561, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\x\AppData\Local\Temp\_MEI108682\gradio\blocks.py", line 1191, in call_function
prediction = await utils.async_iteration(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\x\AppData\Local\Temp\_MEI108682\gradio\utils.py", line 519, in async_iteration
return await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\x\AppData\Local\Temp\_MEI108682\gradio\utils.py", line 512, in __anext__
return await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "anyio\to_thread.py", line 56, in run_sync
File "anyio\_backends\_asyncio.py", line 2134, in run_sync_in_worker_thread
File "anyio\_backends\_asyncio.py", line 851, in run
File "C:\Users\x\AppData\Local\Temp\_MEI108682\gradio\utils.py", line 495, in run_sync_iterator_async
return next(iterator)
^^^^^^^^^^^^^^
File "C:\Users\x\AppData\Local\Temp\_MEI108682\gradio\utils.py", line 666, in gen_wrapper
yield from f(*args, **kwargs)
File "ui\txt2img_ui.py", line 197, in txt2img_inf
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_utils.py", line 733, in from_pretrained
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_txt2img.py", line 55, in __init__
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_utils.py", line 93, in __init__
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_utils.py", line 158, in load_unet
File "apps\stable_diffusion\src\models\model_wrappers.py", line 1347, in unet
File "apps\stable_diffusion\src\models\model_wrappers.py", line 1342, in unet
File "apps\stable_diffusion\src\models\model_wrappers.py", line 71, in check_compilation
SystemExit: Could not compile Unet. Please create an issue with the detailed log at https://github.com/nod-ai/SHARK/issues
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "starlette\routing.py", line 747, in lifespan
File "uvicorn\lifespan\on.py", line 137, in receive
File "asyncio\queues.py", line 158, in get
asyncio.exceptions.CancelledError
ERROR: Exception in ASGI application
[identical traceback to the one above, ending in SystemExit: Could not compile Unet.]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "uvicorn\protocols\http\h11_impl.py", line 404, in run_asgi
File "uvicorn\middleware\proxy_headers.py", line 84, in __call__
File "fastapi\applications.py", line 1054, in __call__
File "starlette\applications.py", line 123, in __call__
File "starlette\middleware\errors.py", line 164, in __call__
File "starlette\middleware\cors.py", line 83, in __call__
File "starlette\middleware\exceptions.py", line 62, in __call__
File "starlette\_exception_handler.py", line 53, in wrapped_app
File "starlette\routing.py", line 762, in __call__
File "starlette\routing.py", line 782, in app
File "starlette\routing.py", line 297, in handle
File "starlette\routing.py", line 77, in app
File "starlette\_exception_handler.py", line 53, in wrapped_app
File "starlette\routing.py", line 75, in app
File "starlette\responses.py", line 261, in __call__
File "starlette\responses.py", line 257, in wrap
File "starlette\responses.py", line 234, in listen_for_disconnect
File "uvicorn\protocols\http\h11_impl.py", line 534, in receive
File "asyncio\locks.py", line 213, in wait
asyncio.exceptions.CancelledError
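A possible lead for triage: the log above reports `Using target triple rdna3-unknown-windows` for a Radeon RX 580, but the RX 580 is a Polaris (GCN 4th gen) part, not RDNA3. Shaders compiled for the wrong architecture would be consistent with `vkCreateShaderModule` failing with `VK_ERROR_INITIALIZATION_FAILED`. A minimal sketch of the mismatch (the `expected_arch` helper and its name table are hypothetical, for illustration only; real detection would query the driver):

```python
# Illustrative only: map reported device names to their actual GPU architecture
# to show that the triple SHARK selected does not match this card.
def expected_arch(device_name: str) -> str:
    # Hypothetical lookup table, not SHARK's real detection logic.
    known = {
        "Radeon RX 580 Series": "polaris-gcn4",  # GCN 4th gen, not RDNA
        "Radeon RX 7900 XTX": "rdna3",
    }
    return known.get(device_name, "unknown")

selected_triple = "rdna3-unknown-windows"  # from the log above
arch = expected_arch("Radeon RX 580 Series")
# The selected triple targets rdna3, but the device is polaris-gcn4:
print(arch, selected_triple.startswith(arch))
```

If this build of SHARK exposes a Vulkan target-triple override flag (some builds have exposed `--iree_vulkan_target_triple`), forcing a triple that matches the installed GPU may be worth trying before filing further.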