JoePenna/Dreambooth-Stable-Diffusion

Error with multiple GPUs: AttributeError: 'CLIPTextEmbeddings' object has no attribute 'embedding_forward'

sweetyshots123 opened this issue · 4 comments

I get an error when executing main.py with the --gpus 0,1 option. No error is thrown when using only one GPU (--gpus 0 or --gpus 1).

!python "main.py" \
 --base configs/stable-diffusion/v1-finetune_unfrozen.yaml \
 -t \
 --actual_resume "model.ckpt" \
 --reg_data_root {reg_data_root} \
 -n {project_name} \
 --gpus 0,1 \
 --data_root "/workspace/Dreambooth-Stable-Diffusion/training_samples" \
 --max_training_steps {max_training_steps} \
 --class_word {class_word} \
 --no-test

output:

[... long preceding output omitted]
/venv/lib/python3.8/site-packages/pytorch_lightning/loggers/test_tube.py:105: LightningDeprecationWarning: The TestTubeLogger is deprecated since v1.5 and will be removed in v1.7. We recommend switching to the `pytorch_lightning.loggers.TensorBoardLogger` as an alternative.
  rank_zero_deprecation(
Monitoring val/loss_simple_ema as checkpoint metric.
Merged modelckpt-cfg: 
{'target': 'pytorch_lightning.callbacks.ModelCheckpoint', 'params': {'dirpath': 'logs/training_samples2022-10-02T00-51-23_tereska/checkpoints', 'filename': '{epoch:06}', 'verbose': True, 'save_last': True, 'monitor': 'val/loss_simple_ema', 'save_top_k': 1, 'every_n_train_steps': 500}}
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
#### Data #####
train, PersonalizedBase, 1200
reg, PersonalizedBase, 15000
validation, PersonalizedBase, 12
accumulate_grad_batches = 1
++++ NOT USING LR SCALING ++++
Setting learning rate to 1.00e-06
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/venv/lib/python3.8/multiprocessing/spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "/venv/lib/python3.8/multiprocessing/spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
  File "/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1185, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'CLIPTextEmbeddings' object has no attribute 'embedding_forward'
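
For what it's worth, the traceback points at a pickling problem rather than a CUDA one: with a spawn-based multi-GPU launch, the module gets pickled for the child process, and a bound method that was monkey-patched onto the CLIPTextEmbeddings instance (built from a function named embedding_forward) is reconstructed on load via getattr, which fails because the class itself never defines that attribute. Below is a minimal sketch of the mechanism with stand-in names; that the repo patches CLIPTextEmbeddings this way is an assumption inferred from the traceback, not confirmed from the source.

import pickle

def embedding_forward(self, input_ids):
    # Stand-in for the custom forward that gets patched onto the embeddings module.
    return input_ids

class Embeddings:
    # Stand-in for CLIPTextEmbeddings; the class itself defines no `embedding_forward`.
    pass

emb = Embeddings()
# Monkey-patch a bound method onto this one instance.
emb.forward = embedding_forward.__get__(emb)

blob = pickle.dumps(emb)  # the bound method is stored as getattr(obj, 'embedding_forward')
pickle.loads(blob)        # AttributeError: 'Embeddings' object has no attribute 'embedding_forward'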

I tried this as well. I just don't think multi-GPU is supported yet; the option seems like a placeholder really only meant for one GPU at the moment.

I didn't read too much into it, though, so I could very well be wrong here.
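
If it really is only the spawn-based launcher that trips over the patched module, a DDP strategy that re-executes the script instead of pickling the model should avoid this code path. Purely as a sketch of the Lightning 1.5/1.6 side; whether main.py passes these options through to its Trainer is an assumption, not something I've checked:

from pytorch_lightning import Trainer

# "ddp" launches worker processes by re-running the script, so the module is
# never pickled; "ddp_spawn" pickles it and would hit the error above.
trainer = Trainer(gpus=2, strategy="ddp")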

Same problem, any updates?

Same problem, any updates?

Same problem, any updates?