podgorskiy/ALAE

Weird figure reconstruction results for newly trained model

5agado opened this issue · 6 comments

I trained a new model on a personal footwear dataset. Sample results from training looked good, but when I ran the make_figures script, I obtained these weird, oversaturated/false-color results.

[image: reconstructions_0]

Any idea what's happening?

Did you train it from scratch or by fine-tuning? What was your dataset size?

Trained from scratch, ~100k images.
The weird thing is that the samples produced during training look good:

[image: sample_129_0]

What are the steps to train a model on a custom dataset?

uhiu commented

Hi, I also encountered this problem. Have you solved it?
[image]

uhiu commented

I think I found the reason. For me, it's because I stopped training at LOD=5, whereas the final LOD should be 6. So I had to adjust the code in the demo Python file like this:

# Z, _ = model.encode(x, layer_count - 1, 1)
Z, _ = model.encode(x, layer_count - 2, 1)
# because layer_count = 7 in the config file, and I want the index to be 5, not 6

Accordingly, we should also adjust the decoder call:

model.decoder(x, 5, 1, noise=True)

Hope it helps. Stay safe.
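Putting the two changes together, a minimal sketch of the idea (names like layer_count, model, and x are taken from the snippets above; whether the decoder receives Z directly or a styles tensor derived from it depends on the particular make_figures script, so treat this only as an illustration of keeping the LOD index consistent):

# LOD index that was actually trained. With MODEL.LAYER_COUNT = 7 a fully
# trained model would use layer_count - 1 = 6; training stopped one level
# earlier here, so both encode and decode must use layer_count - 2 = 5.
layer_count = 7
lod = layer_count - 2  # = 5

Z, _ = model.encode(x, lod, 1)
# ...any intermediate processing from the original script stays unchanged...
reconstruction = model.decoder(Z, lod, 1, noise=True)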

Yes, what @uhiu says seems to be the most likely cause.
When training on custom data, make sure that the final LOD is consistent everywhere.

The first thing to check is the config. There are two parameters:

For example, from the bedroom config:

DATASET.MAX_RESOLUTION_LEVEL: 8 means that it will train up to 2**8 resolution (256).
MODEL.LAYER_COUNT: 7 means that the network will have 7 blocks. We start from 4x4 and each block doubles the resolution except for the first one, so the final output will be 4 * 2 ** (7 - 1), which is 256.

Basically, if you want resolution 2**x, you should set DATASET.MAX_RESOLUTION_LEVEL: x and MODEL.LAYER_COUNT: x-1.
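As a quick sanity check when configuring a custom dataset, that relationship can be expressed as a few lines of Python (the function name is just illustrative, not part of the repo):

def check_config(max_resolution_level, layer_count):
    """Verify DATASET.MAX_RESOLUTION_LEVEL and MODEL.LAYER_COUNT agree."""
    dataset_res = 2 ** max_resolution_level   # resolution the dataset trains up to
    model_res = 4 * 2 ** (layer_count - 1)    # first block is 4x4, each later block doubles it
    assert dataset_res == model_res, (
        "Mismatch: dataset trains up to %d but the generator outputs %d"
        % (dataset_res, model_res)
    )

# Bedroom example from above: 2**8 == 4 * 2**(7 - 1) == 256
check_config(max_resolution_level=8, layer_count=7)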

@5agado,
It seems that the very last layer still has weights left at random initialization.