kohya-ss/sd-scripts

ERROR for FLUX lora training with finetune

Closed this issue · 0 comments

I prepared a dataset metadata JSON file with the merge_captions_to_metadata.py script and then ran train_network.py. However, I got the following error:

2024-10-17 15:00:51 INFO     prepare tokenizer                                           train_util.py:4227
                    INFO     Loading dataset config from huayansi.json                  train_network.py:161
                    ERROR    Invalid user config / ユーザ設定の形式が正しくないようです    config_util.py:368
Traceback (most recent call last):
  File "/home/work/sd-scripts/train_network.py", line 1115, in <module>
    trainer.train(args)
  File "/home/work/sd-scripts/train_network.py", line 197, in train
    blueprint = blueprint_generator.generate(user_config, args, tokenizer=tokenizer)
  File "/home/work/sd-scripts/library/config_util.py", line 402, in generate
    sanitized_user_config = self.sanitizer.sanitize_user_config(user_config)
  File "/home/work/sd-scripts/library/config_util.py", line 365, in sanitize_user_config
    return self.user_config_validator(user_config)
  File "/root/anaconda3/envs/sd-scripts/lib/python3.10/site-packages/voluptuous/schema_builder.py", line 272, in __call__
    return self._compiled([], data)
  File "/root/anaconda3/envs/sd-scripts/lib/python3.10/site-packages/voluptuous/schema_builder.py", line 595, in validate_dict
    return base_validate(path, iteritems(data), out)
  File "/root/anaconda3/envs/sd-scripts/lib/python3.10/site-packages/voluptuous/schema_builder.py", line 433, in validate_mapping
    raise er.MultipleInvalid(errors)
voluptuous.error.MultipleInvalid: extra keys not allowed @ data['/home/work/xllora_data/huayansi/inputs/1.jpg']

Here is my dataset config TOML:

[general]
shuffle_caption = true
keep_tokens = 1

[[datasets]]
resolution = 1536
batch_size = 1

  [[datasets.subsets]]
  image_dir = '/home/work/xllora_data/huayansi/inputs'
  metadata_file = '/home/work/sd-scripts/huayansi.json'

Can you tell me how to fix this?
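For reference, the rejected key in the traceback is an image path, which matches the shape of the metadata JSON rather than the TOML above. A minimal sketch of the difference (the validator logic here is a simplified stand-in for config_util.py, and the caption value is a hypothetical placeholder):

```python
# The dataset config TOML for train_network.py deserializes to a dict
# shaped like this (values taken from the TOML in the issue):
dataset_config = {
    "general": {"shuffle_caption": True, "keep_tokens": 1},
    "datasets": [
        {
            "resolution": 1536,
            "batch_size": 1,
            "subsets": [
                {
                    "image_dir": "/home/work/xllora_data/huayansi/inputs",
                    "metadata_file": "/home/work/sd-scripts/huayansi.json",
                }
            ],
        }
    ],
}

# The metadata JSON written by merge_captions_to_metadata.py is instead
# keyed by image path (caption text is a made-up example):
metadata = {
    "/home/work/xllora_data/huayansi/inputs/1.jpg": {"caption": "a photo"},
}

# A schema that only accepts the config-level keys therefore reports
# every image path in the metadata dict as an "extra key":
ALLOWED_TOP_LEVEL_KEYS = {"general", "datasets"}

def extra_keys(user_config):
    """Return top-level keys a config validator would reject."""
    return [k for k in user_config if k not in ALLOWED_TOP_LEVEL_KEYS]

print(extra_keys(dataset_config))
# []
print(extra_keys(metadata))
# ['/home/work/xllora_data/huayansi/inputs/1.jpg']
```

This mirrors the "extra keys not allowed @ data['.../1.jpg']" message in the traceback: the validated dict had image paths at its top level instead of [general] / [[datasets]].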