podgorskiy/ALAE

Training fails to initialize

MichalZajac opened this issue · 3 comments

Hi,
Could you please help with starting the training?

After starting the training script I get this output:

2020-07-06 19:11:33,163 logger INFO: Namespace(config_file='configs\7359-frackles.yaml', opts=[])
2020-07-06 19:11:33,163 logger INFO: World size: 1
2020-07-06 19:11:33,163 logger INFO: Loaded configuration file configs\7359-frackles.yaml
2020-07-06 19:11:33,163 logger INFO:
NAME: 7359-frackles-test
PPL_CELEBA_ADJUSTMENT: True
DATASET:
PART_COUNT: 16
SIZE: 20000
SIZE_TEST: 49000-20000
PATH: M:/dev/ALAE/project/ALAE-master/data/datasets/frackleLeft_20200108_x128color-dataset/frackleLeft_20200108_x128-dataset-r%02d.tfrecords.%03d
PATH_TEST: M:/dev/ALAE/project/ALAE-master/data/datasets/frackleLeft_20200108_x128color-dataset/frackleLeft_20200108_x128-dataset-r%02d.tfrecords.%03d
MAX_RESOLUTION_LEVEL: 7
STYLE_MIX_PATH: style_mixing/test_images/set_celeba
MODEL:
LATENT_SPACE_SIZE: 256
LAYER_COUNT: 6
MAX_CHANNEL_COUNT: 256
START_CHANNEL_COUNT: 64
DLATENT_AVG_BETA: 0.995
MAPPING_LAYERS: 8
OUTPUT_DIR: training_artifacts/7359-frackles-test
TRAIN:
BASE_LEARNING_RATE: 0.002
EPOCHS_PER_LOD: 6
LEARNING_DECAY_RATE: 0.1
LEARNING_DECAY_STEPS: []
TRAIN_EPOCHS: 80

4 8 16 32 64 128 256 512 1024

LOD_2_BATCH_8GPU: [512, 256, 128, 64, 32, 32, 32, 32, 32]
LOD_2_BATCH_4GPU: [512, 256, 128, 64, 32, 32, 32, 32, 16]
LOD_2_BATCH_2GPU: [128, 128, 128, 64, 32, 32, 16]
LOD_2_BATCH_1GPU: [128, 128, 128, 64, 32, 16]

LEARNING_RATES: [0.0015, 0.0015, 0.0015, 0.0015, 0.0015, 0.0015, 0.002, 0.003, 0.003]

2020-07-06 19:11:33,164 logger INFO: Running with config:
DATASET:
FFHQ_SOURCE: /data/datasets/ffhq-dataset/tfrecords/ffhq/ffhq-r%02d.tfrecords
FLIP_IMAGES: True
MAX_RESOLUTION_LEVEL: 7
PART_COUNT: 16
PART_COUNT_TEST: 1
PATH: M:/dev/ALAE/project/ALAE-master/data/datasets/frackleLeft_20200108_x128color-dataset/frackleLeft_20200108_x128-dataset-r%02d.tfrecords.%03d
PATH_TEST: M:/dev/ALAE/project/ALAE-master/data/datasets/frackleLeft_20200108_x128color-dataset/frackleLeft_20200108_x128-dataset-r%02d.tfrecords.%03d
SAMPLES_PATH: dataset_samples/faces/realign128x128
SIZE: 20000
SIZE_TEST: 29000
STYLE_MIX_PATH: style_mixing/test_images/set_celeba
MODEL:
CHANNELS: 3
DLATENT_AVG_BETA: 0.995
ENCODER: EncoderDefault
GENERATOR: GeneratorDefault
LATENT_SPACE_SIZE: 256
LAYER_COUNT: 6
MAPPING_FROM_LATENT: MappingFromLatent
MAPPING_LAYERS: 8
MAPPING_TO_LATENT: MappingToLatent
MAX_CHANNEL_COUNT: 256
START_CHANNEL_COUNT: 64
STYLE_MIXING_PROB: 0.9
TRUNCATIOM_CUTOFF: 8
TRUNCATIOM_PSI: 0.7
Z_REGRESSION: False
NAME: 7359-frackles-test
OUTPUT_DIR: training_artifacts/7359-frackles-test
PPL_CELEBA_ADJUSTMENT: True
TRAIN:
ADAM_BETA_0: 0.0
ADAM_BETA_1: 0.99
BASE_LEARNING_RATE: 0.002
EPOCHS_PER_LOD: 6
LEARNING_DECAY_RATE: 0.1
LEARNING_DECAY_STEPS: []
LEARNING_RATES: [0.0015, 0.0015, 0.0015, 0.0015, 0.0015, 0.0015, 0.002, 0.003, 0.003]
LOD_2_BATCH_1GPU: [128, 128, 128, 64, 32, 16]
LOD_2_BATCH_2GPU: [128, 128, 128, 64, 32, 32, 16]
LOD_2_BATCH_4GPU: [512, 256, 128, 64, 32, 32, 32, 32, 16]
LOD_2_BATCH_8GPU: [512, 256, 128, 64, 32, 32, 32, 32, 32]
REPORT_FREQ: [100, 80, 60, 30, 20, 10, 10, 5, 5]
SNAPSHOT_FREQ: [300, 300, 300, 100, 50, 30, 20, 20, 10]
TRAIN_EPOCHS: 80
Running on GeForce RTX 2080 Ti
2020-07-06 19:11:35,057 logger INFO: Trainable parameters generator:
2020-07-06 19:11:35,059 logger INFO: Trainable parameters discriminator:
2020-07-06 19:11:35,062 logger INFO: No checkpoint found. Initializing model from scratch
2020-07-06 19:11:35,062 logger INFO: Starting from epoch: 0
2020-07-06 19:11:35,116 logger INFO: ################################################################################
2020-07-06 19:11:35,117 logger INFO: # Switching LOD to 0
2020-07-06 19:11:35,117 logger INFO: # Starting transition
2020-07-06 19:11:35,117 logger INFO: ################################################################################
2020-07-06 19:11:35,117 logger INFO: ################################################################################
2020-07-06 19:11:35,117 logger INFO: # Transition ended
2020-07-06 19:11:35,117 logger INFO: ################################################################################
2020-07-06 19:11:35,119 logger INFO: Batch size: 128, Batch size per GPU: 128, LOD: 0 - 4x4, blend: 1.000, dataset size: 20000
Backend TkAgg is interactive backend. Turning interactive mode on.

Process finished with exit code -1073741819 (0xC0000005)

When debugging in PyCharm I found that the error occurs on line 74 of data_loader.py, when calling b = next(yielder)

But since I have very little experience debugging Python, I would be glad if you know what the problem might be.

Thank you very much in advance.

Same Error

@MagicalForcee

Please update dareblopy: pip install dareblopy --upgrade. The newer version will be more verbose in case of errors.

Other than that, it's hard to say. I did not try training on Windows. It should be possible, but there might be nuances.
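Note that exit code -1073741819 (0xC0000005) is a Windows access violation raised in native code, which is why there is no Python traceback and why a try/except around next(yielder) won't catch it. To narrow it down, you could try reading one tfrecords shard directly with dareblopy, bypassing ALAE's data_loader entirely. A minimal sketch (the shard path is a placeholder you must point at one of your files, and the RecordReader iteration API is assumed from dareblopy's README, so it may differ in your installed version):

```python
# Hypothetical standalone check: count raw records in one tfrecords shard
# with dareblopy. If this also crashes with 0xC0000005, the problem is in
# dareblopy's native reader (or a corrupt/mismatched shard), not in ALAE.
import os

SHARD = "path/to/your-dataset-r02.tfrecords.000"  # placeholder - point at a real shard

def count_records(path):
    """Return the number of raw records in a tfrecords shard, or None if
    the file does not exist."""
    if not os.path.exists(path):
        return None
    import dareblopy as db          # imported lazily so the check degrades gracefully
    reader = db.RecordReader(path)  # assumed API: iterable over raw record bytes
    return sum(1 for _ in reader)

n = count_records(SHARD)
print("records:", n if n is not None else "shard not found - fix SHARD first")
```

If the count comes back fine, the crash is more likely in the parsing/batching layer (ParsedTFRecordsDatasetIterator) or in how the Windows paths with %02d/%03d placeholders are expanded.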

Please update dareblopy and post here if you still have an issue.