camenduru/PhotoMaker-colab

PEFT backend is required for this method.


/content
Cloning into 'PhotoMaker'...
remote: Enumerating objects: 64, done.
remote: Counting objects: 100% (14/14), done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 64 (delta 8), reused 7 (delta 7), pack-reused 50
Receiving objects: 100% (64/64), 7.22 MiB | 14.56 MiB/s, done.
Resolving deltas: 100% (12/12), done.
/content/PhotoMaker
[pip download progress bars omitted]
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
Building wheel for lit (pyproject.toml) ... done
Preparing metadata (setup.py) ... done
Building wheel for antlr4-python3-runtime (setup.py) ... done
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling transformers.utils.move_cache().
 0/0 [00:00<?, ?it/s]
/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_token.py:88: UserWarning:
The secret HF_TOKEN does not exist in your Colab secrets.
To authenticate with the Hugging Face Hub, create a token in your settings tab (https://huggingface.co/settings/tokens), set it as secret in your Google Colab and restart your session.
You will be able to reuse this secret in all of your notebooks.
Please note that authentication is recommended but still optional to access public models or datasets.
warnings.warn(
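
The token warning is informational: every file this notebook fetches is public, so the downloads succeed regardless. To silence it, a token can be stored as a Colab secret named HF_TOKEN and used to log in before the downloads start. A minimal sketch, assuming such a secret exists:

# Optional: authenticate to the Hugging Face Hub from Colab.
# Assumes a token is saved as the Colab secret "HF_TOKEN";
# public models and datasets download fine without this step.
from google.colab import userdata
from huggingface_hub import login

login(token=userdata.get("HF_TOKEN"))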
photomaker-v1.bin: 100% 934M/934M [00:01<00:00, 453MB/s]
model_index.json: 100% 577/577 [00:00<00:00, 47.5kB/s]
Fetching 18 files: 100% 18/18 [00:21<00:00, 1.44s/it]
model.fp16.safetensors: 100% 1.39G/1.39G [00:07<00:00, 191MB/s]
model.fp16.safetensors: 100% 246M/246M [00:00<00:00, 272MB/s]
text_encoder/config.json: 100% 560/560 [00:00<00:00, 42.7kB/s]
tokenizer/merges.txt: 100% 525k/525k [00:00<00:00, 19.4MB/s]
tokenizer/tokenizer_config.json: 100% 737/737 [00:00<00:00, 15.3kB/s]
tokenizer/special_tokens_map.json: 100% 472/472 [00:00<00:00, 14.1kB/s]
text_encoder_2/config.json: 100% 570/570 [00:00<00:00, 10.4kB/s]
scheduler/scheduler_config.json: 100% 474/474 [00:00<00:00, 13.4kB/s]
tokenizer/vocab.json: 100% 1.06M/1.06M [00:00<00:00, 4.28MB/s]
tokenizer_2/special_tokens_map.json: 100% 460/460 [00:00<00:00, 14.8kB/s]
tokenizer_2/tokenizer_config.json: 100% 725/725 [00:00<00:00, 23.6kB/s]
diffusion_pytorch_model.fp16.safetensors: 100% 5.14G/5.14G [00:20<00:00, 422MB/s]
diffusion_pytorch_model.fp16.safetensors: 100% 167M/167M [00:00<00:00, 321MB/s]
tokenizer_2/vocab.json: 100% 1.06M/1.06M [00:00<00:00, 4.36MB/s]
unet/config.json: 100% 1.68k/1.68k [00:00<00:00, 105kB/s]
vae/config.json: 100% 602/602 [00:00<00:00, 38.6kB/s]
Loading pipeline components...: 100% 7/7 [00:03<00:00, 2.04it/s]
Loading PhotoMaker components [1] id_encoder from [/root/.cache/huggingface/hub/models--TencentARC--PhotoMaker/snapshots/d7ec3fc17290263135825194aeb3bc456da67cc5]...
Loading PhotoMaker components [2] lora_weights from [/root/.cache/huggingface/hub/models--TencentARC--PhotoMaker/snapshots/d7ec3fc17290263135825194aeb3bc456da67cc5]
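
The log above comes from the adapter-loading cell. Per the PhotoMaker README, the sequence looks roughly like the sketch below (the base checkpoint and dtype are the README's example values; the notebook's own choices may differ), and the final call is the one that raises in the traceback that follows:

import os
import torch
from huggingface_hub import hf_hub_download
from photomaker import PhotoMakerStableDiffusionXLPipeline

# Download the PhotoMaker checkpoint (a public repo, no token needed)
photomaker_path = hf_hub_download(
    repo_id="TencentARC/PhotoMaker",
    filename="photomaker-v1.bin",
    repo_type="model",
)

# Base SDXL checkpoint: the README's example; an assumption here
pipe = PhotoMakerStableDiffusionXLPipeline.from_pretrained(
    "SG161222/RealVisXL_V3.0",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Load PhotoMaker checkpoint
pipe.load_photomaker_adapter(
    os.path.dirname(photomaker_path),
    subfolder="",
    weight_name=os.path.basename(photomaker_path),
    trigger_word="img",  # marks the ID input images in prompts
)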

ValueError                                Traceback (most recent call last)
in <cell line: 49>()
     47
     48 # Load PhotoMaker checkpoint
---> 49 pipe.load_photomaker_adapter(
     50     os.path.dirname(photomaker_path),
     51     subfolder="",

2 frames
/usr/local/lib/python3.10/dist-packages/diffusers/loaders/lora.py in load_lora_weights(self, pretrained_model_name_or_path_or_dict, adapter_name, **kwargs)
   1228         """
   1229         if not USE_PEFT_BACKEND:
-> 1230             raise ValueError("PEFT backend is required for this method.")
   1231
   1232         # We could have accessed the unet config from lora_state_dict() too. We pass

ValueError: PEFT backend is required for this method.
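
The check that raises lives in diffusers' LoRA loader: load_lora_weights() requires the peft package, and USE_PEFT_BACKEND turns on only when peft (together with a recent transformers) is importable. The pip lines in this notebook never install peft, which is why loading PhotoMaker's LoRA weights fails. A minimal sketch to confirm what the running session is missing:

# Check whether the packages the PEFT backend needs are importable.
import importlib.util

for pkg in ("peft", "diffusers", "transformers"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'installed' if found else 'MISSING'}")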

[Screenshot attached: 2024-03-25 064900]
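
The fix is to add peft to the installs; diffusers then picks up the PEFT backend and load_photomaker_adapter() completes. Updated notebook cell: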

%cd /content
!git clone -b dev https://github.com/camenduru/PhotoMaker-hf
%cd /content/PhotoMaker-hf

# Pin CUDA 11.8 builds of the PyTorch stack, then the inference dependencies
!pip install -q torch==2.0.1+cu118 torchvision==0.15.2+cu118 torchaudio==2.0.2+cu118 torchtext==0.15.2 torchdata==0.6.1 --extra-index-url https://download.pytorch.org/whl/cu118 -U
!pip install -q xformers==0.0.20 diffusers accelerate einops onnxruntime-gpu omegaconf gradio
!pip install triton
!pip install peft  # the missing package: diffusers' load_lora_weights() needs it

!python app.py
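
Note that diffusers evaluates USE_PEFT_BACKEND once, when it is first imported, so installing peft into a session that has already imported diffusers still raises the same error. Restart the Colab runtime (or run all the installs before any imports, as the cell above does) and then launch app.py.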