ForserX/StableDiffusionUI

Error while importing model

Sinestessia opened this issue · 5 comments

  1. When importing a model, the UI is confusing: there is no progress bar, the import button doesn't grey out, etc.

  2. After a while I got this output:

======================= 0 NONE 0 NOTE 0 WARNING 1 ERROR ========================
ERROR: missing-standard-symbolic-function
=========================================
Exporting the operator 'aten::scaled_dot_product_attention' to ONNX opset version 14 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues.
None

  File "${Workspace}\repo\onnx.venv\lib\site-packages\torch\onnx\utils.py", line 506, in export
    _export(
  File "${Workspace}\repo\onnx.venv\lib\site-packages\torch\onnx\utils.py", line 1548, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "${Workspace}\repo\onnx.venv\lib\site-packages\torch\onnx\utils.py", line 1117, in _model_to_graph
    graph = _optimize_graph(
  File "${Workspace}\repo\onnx.venv\lib\site-packages\torch\onnx\utils.py", line 665, in _optimize_graph
    graph = _C._jit_pass_onnx(graph, operator_export_type)
  File "${Workspace}\repo\onnx.venv\lib\site-packages\torch\onnx\utils.py", line 1901, in _run_symbolic_function
    raise errors.UnsupportedOperatorError(
torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::scaled_dot_product_attention' to ONNX opset version 14 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues.
${Workspace}\models\shark>exit

Thanks
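
The failure above boils down to torch 2.0.0's ONNX exporter having no symbolic function for aten::scaled_dot_product_attention at opset 14. A minimal sketch of that behaviour (a toy module, not taken from this repo) reproduces the same UnsupportedOperatorError:

```python
import torch
import torch.nn.functional as F

class SDPABlock(torch.nn.Module):
    def forward(self, q, k, v):
        # PyTorch 2.0 fused attention; torch 2.0.0 has no ONNX symbolic for it at opset 14
        return F.scaled_dot_product_attention(q, k, v)

q = k = v = torch.randn(1, 8, 64, 32)
try:
    torch.onnx.export(SDPABlock(), (q, k, v), "sdpa.onnx", opset_version=14)
except torch.onnx.errors.UnsupportedOperatorError as err:
    print(err)  # same "aten::scaled_dot_product_attention ... not supported" message
```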

Can you paste a couple of lines above that?

I seem to have found the reason. I need a couple of minutes. I love dependency updates.

Host started...

Name  -  Radeon RX Vega
DeviceID  -  VideoController1
AdapterRAM  -  4293918720
AdapterDACType  -  Internal DAC(400MHz)
Monochrome  -  False
DriverVersion  -  31.0.14051.1000
VideoProcessor  -  AMD Radeon Graphics Processor (0x687F)
VideoArchitecture  -  5
VideoMemoryType  -  2

 Startup extract ckpt(${Workspace}\models\Stable-diffusion\AnythingV5_v5PrtRE.safetensors)..... 

Microsoft Windows [Version 10.0.19045.2846]
(c) Microsoft Corporation. All rights reserved.
global_step key not found in model
Downloading (…)lve/main/config.json: 100%|##########| 4.52k/4.52k [00:00<?, ?B/s]
To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to see activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
Downloading pytorch_model.bin: 100%|##########| 1.71G/1.71G [00:19<00:00, 86.1MB/s]
- This IS expected if you are initializing CLIPTextModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPTextModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Downloading (…)lve/main/config.json: 100%|##########| 4.55k/4.55k [00:00<00:00, 4.54MB/s]
text_config_dict is provided which will be used to initialize CLIPTextConfig. The value text_config["id2label"] will be overriden.
Downloading (…)rocessor_config.json: 100%|##########| 342/342 [00:00<?, ?B/s] 
${Workspace}\repo\onnx.venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
SD: Done: Diffusion
${Workspace}\repo\onnx.venv\lib\site-packages\diffusers\models\cross_attention.py:30: FutureWarning: Importing from cross_attention is deprecated. Please import from diffusers.models.attention_processor instead.
  deprecate(
${Workspace}\repo\onnx.venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  mask.fill_(torch.tensor(torch.finfo(dtype).min))
  if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
  if causal_attention_mask.size() != (bsz, 1, tgt_len, src_len):
  if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
${Workspace}\repo\onnx.venv\lib\site-packages\diffusers\models\cross_attention.py:51: FutureWarning: CrossAttnProcessor is deprecated and will be removed in 0.18.0. Please use `from diffusers.models.attention_processor import AttnProcessor instead.
  deprecate("cross_attention", "0.18.0", deprecation_message, standard_warn=False)
  if any(s % default_overall_up_factor != 0 for s in sample.shape[-2:]):
  assert hidden_states.shape[1] == self.channels
  assert hidden_states.shape[1] == self.channels
  assert hidden_states.shape[1] == self.channels
  if hidden_states.shape[0] >= 64:
  if not return_dict:
  if any(s % default_overall_up_factor != 0 for s in sample.shape[-2:]):
  if not return_dict:
  if not return_dict:
${Workspace}\repo\onnx.venv\lib\site-packages\torch\onnx\_internal\jit_utils.py:306: UserWarning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied. (Triggered internally at ..\torch\csrc\jit\passes\onnx\constant_fold.cpp:181.)
  _C._jit_pass_onnx_node_shape_type_inference(node, params_dict, opset_version)
${Workspace}\models\onnx/AnythingV5_v5PrtRE
============== Diagnostic Run torch.onnx.export version 2.0.0+cpu ==============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================

============== Diagnostic Run torch.onnx.export version 2.0.0+cpu ==============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================

============== Diagnostic Run torch.onnx.export version 2.0.0+cpu ==============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================

============== Diagnostic Run torch.onnx.export version 2.0.0+cpu ==============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 1 ERROR ========================
ERROR: missing-standard-symbolic-function
=========================================
Exporting the operator 'aten::scaled_dot_product_attention' to ONNX opset version 14 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues.
None
<Set verbose=True to see more details>

Traceback (most recent call last):
  File "${Workspace}\repo\diffusion_scripts\convert\convert_diffusers_to_onnx.py", line 375, in <module>
    convert_models(args.model_path, args.output_path, args.opset, args.fp16)
  File "${Workspace}\repo\onnx.venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "${Workspace}\repo\diffusion_scripts\convert\convert_diffusers_to_onnx.py", line 253, in convert_models
    onnx_export(
  File "${Workspace}\repo\diffusion_scripts\convert\convert_diffusers_to_onnx.py", line 61, in onnx_export
    export(
  File "${Workspace}\repo\onnx.venv\lib\site-packages\torch\onnx\utils.py", line 506, in export
    _export(
  File "${Workspace}\repo\onnx.venv\lib\site-packages\torch\onnx\utils.py", line 1548, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "${Workspace}\repo\onnx.venv\lib\site-packages\torch\onnx\utils.py", line 1117, in _model_to_graph
    graph = _optimize_graph(
  File "${Workspace}\repo\onnx.venv\lib\site-packages\torch\onnx\utils.py", line 665, in _optimize_graph
    graph = _C._jit_pass_onnx(graph, operator_export_type)
  File "${Workspace}\repo\onnx.venv\lib\site-packages\torch\onnx\utils.py", line 1901, in _run_symbolic_function
    raise errors.UnsupportedOperatorError(
torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::scaled_dot_product_attention' to ONNX opset version 14 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues.
${Workspace}\models\shark>exit
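
For reference, the export dies while converting the UNet, because recent diffusers builds default to the torch 2.0 SDPA attention processor. One possible workaround (a sketch under assumptions, not the fix applied in this thread; the model path is a placeholder) is to switch the UNet back to the classic attention processor before export, so the traced graph only contains ops that opset 14 can represent:

```python
import torch
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import AttnProcessor

# "path/to/diffusers_model" is a placeholder for the locally extracted diffusers folder.
unet = UNet2DConditionModel.from_pretrained("path/to/diffusers_model", subfolder="unet")
unet.set_attn_processor(AttnProcessor())  # classic matmul/softmax path, no aten::scaled_dot_product_attention
unet.eval()

class UNetWrapper(torch.nn.Module):
    """Unwrap the diffusers output dataclass so the exporter sees a plain tensor."""
    def __init__(self, unet):
        super().__init__()
        self.unet = unet

    def forward(self, sample, timestep, encoder_hidden_states):
        return self.unet(sample, timestep, encoder_hidden_states, return_dict=False)[0]

sample = torch.randn(1, unet.config.in_channels, 64, 64)
timestep = torch.tensor([1])
encoder_hidden_states = torch.randn(1, 77, unet.config.cross_attention_dim)

torch.onnx.export(
    UNetWrapper(unet),
    (sample, timestep, encoder_hidden_states),
    "unet.onnx",
    opset_version=14,
    input_names=["sample", "timestep", "encoder_hidden_states"],
    output_names=["out_sample"],
)
```

The fix actually used here is simpler: update torch so the exporter knows the operator, as described below.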

Click as shown in the screenshot:
[screenshot]

Then enter the command:
py -install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cpu
[screenshot]

Press "Enter" and wait for install successful

Or download the new release (XUI 3.1.4 Preview.2.7z).
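
To confirm the updated environment resolves the error, a quick hedged check (same toy module as in the repro sketch above; recent torch nightlies register an opset-14 symbolic for aten::scaled_dot_product_attention, so this should now succeed):

```python
import torch
import torch.nn.functional as F

print(torch.__version__)  # expect a 2.1.x nightly/preview build after the update

class SDPABlock(torch.nn.Module):
    def forward(self, q, k, v):
        return F.scaled_dot_product_attention(q, k, v)

q = k = v = torch.randn(1, 8, 64, 32)
torch.onnx.export(SDPABlock(), (q, k, v), "sdpa_check.onnx", opset_version=14)
print("aten::scaled_dot_product_attention exported at opset 14 without error")
```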