Fine-tune model error
```shell
accelerate launch diff_train.py \
    --pretrained_model_name_or_path stabilityai/stable-diffusion-2-1 \
    --instance_data_dir train/images_large \
    --resolution=256 --gradient_accumulation_steps=1 --center_crop --random_flip \
    --learning_rate=5e-6 --lr_scheduler constant_with_warmup \
    --lr_warmup_steps=5000 --max_train_steps=100000 \
    --train_batch_size=16 --save_steps=10000 --modelsavesteps 20000 --duplication nodup \
    --output_dir=output --class_prompt classlevel --instance_prompt_loc miscdata/laion_combined_captions.json
```

It fails with:

```
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError:
```
I used the train/images_large directory from the LAION-10k split here. May I ask what is going wrong? Thanks for your reply.
This error message alone is not helpful; please post the entire log. Thanks!
Thanks for your reply. The bash command I used:
```shell
accelerate launch diff_train.py \
    --pretrained_model_name_or_path stabilityai/stable-diffusion-2-1 \
    --instance_data_dir train \
    --resolution=256 --gradient_accumulation_steps=1 --center_crop --random_flip \
    --learning_rate=5e-6 --lr_scheduler constant_with_warmup \
    --lr_warmup_steps=5000 --max_train_steps=100000 \
    --train_batch_size=16 --save_steps=10000 --modelsavesteps 20000 --duplication nodup \
    --output_dir=output --class_prompt classlevel --instance_prompt_loc miscdata/laion_combined_captions.json
```
And the error I got:
```
The following values were not passed to `accelerate launch` and had defaults used instead:
        `--num_processes` was set to a value of `1`
        `--num_machines` was set to a value of `1`
        `--mixed_precision` was set to a value of `'no'`
        `--dynamo_backend` was set to a value of `'no'`
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
/home/hangyi/anaconda3/envs/diffrep/lib/python3.9/site-packages/accelerate/accelerator.py:231: FutureWarning: `logging_dir` is deprecated and will be removed in version 0.18.0 of 🤗 Accelerate. Use `project_dir` instead.
  warnings.warn(
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
Traceback (most recent call last):
  File "/home/hangyi/Documents/GitHub/DCR/diff_train.py", line 764, in <module>
    main(args)
  File "/home/hangyi/Documents/GitHub/DCR/diff_train.py", line 462, in main
    temp = list(train_dataset.prompts.values())
AttributeError: 'ObjectAttributeDataset' object has no attribute 'prompts'
Traceback (most recent call last):
  File "/home/hangyi/anaconda3/envs/diffrep/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/home/hangyi/anaconda3/envs/diffrep/lib/python3.9/site-packages/accelerate/commands/accelerate_cli.py", line 45, in main
    args.func(args)
  File "/home/hangyi/anaconda3/envs/diffrep/lib/python3.9/site-packages/accelerate/commands/launch.py", line 1097, in launch_command
    simple_launcher(args)
  File "/home/hangyi/anaconda3/envs/diffrep/lib/python3.9/site-packages/accelerate/commands/launch.py", line 552, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/hangyi/anaconda3/envs/diffrep/bin/python', 'diff_train.py', '--pretrained_model_name_or_path', 'stabilityai/stable-diffusion-2-1', '--instance_data_dir', 'train', '--resolution=256', '--gradient_accumulation_steps=1', '--center_crop', '--random_flip', '--learning_rate=5e-6', '--lr_scheduler', 'constant_with_warmup', '--lr_warmup_steps=5000', '--max_train_steps=100000', '--train_batch_size=16', '--save_steps=10000', '--modelsavesteps', '20000', '--duplication', 'nodup', '--output_dir=output', '--class_prompt', 'classlevel', '--instance_prompt_loc', 'miscdata/laion_combined_captions.json']' returned non-zero exit status 1
```
The key error:

```
Traceback (most recent call last):
  File "/home/hangyi/Documents/GitHub/DCR/diff_train.py", line 764, in <module>
    main(args)
  File "/home/hangyi/Documents/GitHub/DCR/diff_train.py", line 462, in main
    temp = list(train_dataset.prompts.values())
AttributeError: 'ObjectAttributeDataset' object has no attribute 'prompts'
```

Hope this helps.
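For context, the crash itself is an ordinary Python `AttributeError`: with these arguments, `diff_train.py` apparently constructs an `ObjectAttributeDataset`, which does not carry the `prompts` mapping that line 462 reads. A minimal sketch of the failure mode, with hypothetical stand-in classes (not the repo's actual implementations):

```python
# Stand-in for a dataset variant that carries per-image prompts.
class PromptDataset:
    def __init__(self):
        self.prompts = {"img_0.png": "a photo of a cat"}


# Stand-in for the variant from the traceback: no `prompts` attribute.
class ObjectAttributeDataset:
    pass


def collect_prompts(train_dataset):
    # The failing line in diff_train.py does the equivalent of:
    #     temp = list(train_dataset.prompts.values())
    # A guarded version turns the hard crash into an actionable message.
    if not hasattr(train_dataset, "prompts"):
        raise TypeError(
            f"{type(train_dataset).__name__} has no `prompts`; check "
            "--instance_data_dir / --class_prompt so the prompt-bearing "
            "dataset class is selected."
        )
    return list(train_dataset.prompts.values())


print(collect_prompts(PromptDataset()))       # ['a photo of a cat']
# collect_prompts(ObjectAttributeDataset())   # would raise TypeError
```

This suggests the arguments (e.g. `--instance_data_dir train` vs. `train/images_large`, or the `--class_prompt` mode) steered the script into a dataset class that never loads captions, so checking which class the repo instantiates for your flags is a reasonable first step.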