[ERROR] Model initialization failed: The pyramid_flux does not support high resolution now, we will release it after finishing training. You can modify the model_name to pyramid_mmdit to support 768p version generation
Giribot commented
Hello!
Thank you for uploading the 768p model!
But I get an error (when I try to generate a 768p video) after updating everything in the "Pyramid Flow Video Generation Demo".
Launching the following works fine at the lower resolution but fails at 768p:
D:\Data\Packages\Pyramid-Flow>python app.py
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.4.0+cu121 with CUDA 1201 (you have 2.4.0+cu118)
Python 3.10.11 (you have 3.10.6)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
A matching Triton is not available, some optimizations will not be enabled
Traceback (most recent call last):
File "C:\Users\gilda\AppData\Local\Programs\Python\Python310\lib\site-packages\xformers\__init__.py", line 57, in _is_triton_available
import triton # noqa
File "C:\Users\gilda\AppData\Local\Programs\Python\Python310\lib\site-packages\triton\__init__.py", line 13, in <module>
from . import language
File "C:\Users\gilda\AppData\Local\Programs\Python\Python310\lib\site-packages\triton\language\__init__.py", line 2, in <module>
from . import core, extern, libdevice, random
File "C:\Users\gilda\AppData\Local\Programs\Python\Python310\lib\site-packages\triton\language\core.py", line 1141, in <module>
def abs(x):
File "C:\Users\gilda\AppData\Local\Programs\Python\Python310\lib\site-packages\triton\runtime\jit.py", line 386, in jit
return JITFunction(args[0], **kwargs)
File "C:\Users\gilda\AppData\Local\Programs\Python\Python310\lib\site-packages\triton\runtime\jit.py", line 315, in __init__
self.run = self._make_launcher()
File "C:\Users\gilda\AppData\Local\Programs\Python\Python310\lib\site-packages\triton\runtime\jit.py", line 282, in _make_launcher
scope = {"version_key": version_key(), "get_cuda_stream": get_cuda_stream,
File "C:\Users\gilda\AppData\Local\Programs\Python\Python310\lib\site-packages\triton\runtime\jit.py", line 82, in version_key
with open(triton._C.libtriton.__file__, "rb") as f:
AttributeError: partially initialized module 'triton' has no attribute '_C' (most likely due to a circular import)
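An aside, since it is easy to conflate with the main failure: the xFormers/Triton noise above is a separate, non-fatal mismatch. The installed xformers wheel was built for PyTorch 2.4.0+cu121 and Python 3.10.11, while this machine runs 2.4.0+cu118 and 3.10.6. A minimal sketch (not part of the original log) for confirming the local build:

```python
# Check which torch wheel is actually installed; an xformers wheel must
# match both the torch version and its CUDA suffix to load its extensions.
import torch

print(torch.__version__)   # expect "2.4.0+cu118" on this setup
print(torch.version.cuda)  # CUDA toolkit the installed torch wheel targets

# If these disagree with what xformers reports, reinstalling a wheel built
# for the same CUDA suffix is the usual remedy, e.g. (assumed command):
#   pip install -U xformers --index-url https://download.pytorch.org/whl/cu118
```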
[WARNING] Required file 'config.json' missing in 'D:\Data\Packages\Pyramid-Flow\pyramid_flow_model\diffusion_transformer_768p'.
[INFO] Downloading model from 'rain1011/pyramid-flow-miniflux' to 'D:\Data\Packages\Pyramid-Flow\pyramid_flow_model'...
Fetching 24 files: 0%| | 0/24 [00:00<?, ?it/s]C:\Users\gilda\AppData\Local\Programs\Python\Python310\lib\site-packages\huggingface_hub\file_download.py:834: UserWarning: `local_dir_use_symlinks` parameter is deprecated and will be ignored. The process to download files to a local folder has been updated and do not rely on symlinks anymore. You only need to pass a destination folder as`local_dir`.
For more details, check out https://huggingface.co/docs/huggingface_hub/main/en/guides/download#download-files-to-local-folder.
warnings.warn(
diffusion_transformer_768p/config.json: 100%|█████████████████████████████████████████████████| 465/465 [00:00<?, ?B/s]
Fetching 24 files: 4%|██▋ | 1/24 [00:02<00:54, 2.36s/it]Error while downloading from https://cdn-lfs-us-1.hf.co/repos/e9/1b/e91b157029b31632e24bd2951aba461f4d3309bc116d5c5a2cae148af6292c87/864de0e1afd9dd2c373d957ac2c54346f5006036dc7aa8ec7605db80eea2272c?response-content-disposition=inline%3B+filename*%3DUTF-8%27%27diffusion_pytorch_model.safetensors%3B+filename%3D%22diffusion_pytorch_model.safetensors%22%3B&Expires=1731770288&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTczMTc3MDI4OH19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy11cy0xLmhmLmNvL3JlcG9zL2U5LzFiL2U5MWIxNTcwMjliMzE2MzJlMjRiZDI5NTFhYmE0NjFmNGQzMzA5YmMxMTZkNWM1YTJjYWUxNDhhZjYyOTJjODcvODY0ZGUwZTFhZmQ5ZGQyYzM3M2Q5NTdhYzJjNTQzNDZmNTAwNjAzNmRjN2FhOGVjNzYwNWRiODBlZWEyMjcyYz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSoifV19&Signature=NswlavkJyuzxeXYeSvVhDbqDfM83d1ovglNezbDJmykyJXK3D6A01-Ctpu8X4jCl4w63TyrOAo-PfPD7BHbYe-34CRT6-xHqcfA6%7EMv034hEwAXavYvnk2N9itOhZgDNNjmqDpdDz3IPPA1d8r%7Ep-hgVt4UTKHa7srRAlOoA8Wt6mNXZljCuTMB-pFNl58ZUK-NgPIr7R5uk5bGv2de1v%7EhU9T-I%7EAHp-dYDW5wIBHEp90Ot1AuHdPULrVC0fOIf2FhKjnXCbAOqDAo%7ECd%7EPEzeu7e6nS7sAQGeW94wsXgJAxYrT3fqve-4hbz1iP5Ypqh9v2VVu%7E2ATSc-8KvJJmg__&Key-Pair-Id=K24J24Z295AEI9: HTTPSConnectionPool(host='cdn-lfs-us-1.hf.co', port=443): Read timed out.
Trying to resume download...
Error while downloading from https://cdn-lfs-us-1.hf.co/repos/e9/1b/e91b157029b31632e24bd2951aba461f4d3309bc116d5c5a2cae148af6292c87/864de0e1afd9dd2c373d957ac2c54346f5006036dc7aa8ec7605db80eea2272c?response-content-disposition=inline%3B+filename*%3DUTF-8%27%27diffusion_pytorch_model.safetensors%3B+filename%3D%22diffusion_pytorch_model.safetensors%22%3B&Expires=1731770288&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTczMTc3MDI4OH19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy11cy0xLmhmLmNvL3JlcG9zL2U5LzFiL2U5MWIxNTcwMjliMzE2MzJlMjRiZDI5NTFhYmE0NjFmNGQzMzA5YmMxMTZkNWM1YTJjYWUxNDhhZjYyOTJjODcvODY0ZGUwZTFhZmQ5ZGQyYzM3M2Q5NTdhYzJjNTQzNDZmNTAwNjAzNmRjN2FhOGVjNzYwNWRiODBlZWEyMjcyYz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSoifV19&Signature=NswlavkJyuzxeXYeSvVhDbqDfM83d1ovglNezbDJmykyJXK3D6A01-Ctpu8X4jCl4w63TyrOAo-PfPD7BHbYe-34CRT6-xHqcfA6%7EMv034hEwAXavYvnk2N9itOhZgDNNjmqDpdDz3IPPA1d8r%7Ep-hgVt4UTKHa7srRAlOoA8Wt6mNXZljCuTMB-pFNl58ZUK-NgPIr7R5uk5bGv2de1v%7EhU9T-I%7EAHp-dYDW5wIBHEp90Ot1AuHdPULrVC0fOIf2FhKjnXCbAOqDAo%7ECd%7EPEzeu7e6nS7sAQGeW94wsXgJAxYrT3fqve-4hbz1iP5Ypqh9v2VVu%7E2ATSc-8KvJJmg__&Key-Pair-Id=K24J24Z295AEI9: HTTPSConnectionPool(host='cdn-lfs-us-1.hf.co', port=443): Read timed out.
Trying to resume download...
diffusion_pytorch_model.safetensors: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7.89G/7.89G [18:58<00:00, 5.21MB/s]
diffusion_pytorch_model.safetensors: 25%|█████████▍ | 1.96G/7.89G [24:37<1:15:38, 1.31MB/s]
diffusion_pytorch_model.safetensors: 0%|▏ | 31.5M/7.89G [24:57<103:53:07, 21.0kB/s]
Fetching 24 files: 100%|███████████████████████████████████████████████████████████████| 24/24 [25:03<00:00, 62.63s/it]
diffusion_pytorch_model.safetensors: 100%|█████████████████████████████████████████████| 7.89G/7.89G [18:58<00:00, 7.98MB/s]
[INFO] Model download complete.
C:\Users\gilda\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\helpers.py:147: UserWarning: In future versions of Gradio, the `cache_examples` parameter will no longer accept a value of 'lazy'. To enable lazy caching in Gradio, you should set `cache_examples=True`, and `cache_mode='lazy'` instead.
warnings.warn(
Will cache examples in 'D:\Data\Packages\Pyramid-Flow\.gradio\cached_examples\16' directory at first use. If method or examples have changed since last caching, delete this folder to reset cache.
Will cache examples in 'D:\Data\Packages\Pyramid-Flow\.gradio\cached_examples\28' directory at first use.
INFO: Could not find files for the specified model(s).
* Running on local URL: http://127.0.0.1:7861
Could not create share link. Please check your internet connection or our status page: https://status.gradio.app.
[DEBUG] generate_text_to_video called.
[INFO] Initializing model with variant='768p', using bf16 precision...
[DEBUG] Model base path: D:\Data\Packages\Pyramid-Flow\pyramid_flow_model
[ERROR] Error initializing model: The pyramid_flux does not support high resolution now, we will release it after finishing training. You can modify the model_name to pyramid_mmdit to support 768p version generation
[ERROR] Model initialization failed: The pyramid_flux does not support high resolution now, we will release it after finishing training. You can modify the model_name to pyramid_mmdit to support 768p version generation
How can I fix this?
Thank you!
jy0205 commented
You can update the code to the latest commit.
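For anyone who lands here before updating: the error message itself names the workaround. At the time of this issue the flux-based checkpoint (pyramid_flux) only shipped lower-resolution weights, so 768p generation needed the MMDiT variant. Below is a minimal sketch of that switch, assuming the constructor arguments shown in the repo README; the path and dtype are taken from the log above. Note that the pyramid_mmdit weights live in a different checkpoint repo than the miniflux one downloaded in this log, so they may need a separate download.

```python
from pyramid_dit import PyramidDiTForVideoGeneration

variant = "diffusion_transformer_768p"

# pyramid_flux had no 768p weights when this issue was filed; the error
# message suggests pyramid_mmdit for 768p generation.
model_name = "pyramid_mmdit" if variant.endswith("768p") else "pyramid_flux"

model = PyramidDiTForVideoGeneration(
    "D:/Data/Packages/Pyramid-Flow/pyramid_flow_model",  # checkpoint dir from the log
    "bf16",                                              # the demo logs "using bf16 precision"
    model_name=model_name,
    model_variant=variant,
)
```

Updating to the latest commit as suggested above (e.g. `git pull` in the repo directory) may make this switch unnecessary if flux 768p support has since landed.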