thalitadru/LDMnet-pytorch

unstable performance on CIFAR

Opened this issue · 0 comments

xqri commented

When I ran `python main.py with cifar train_size=1000 device=cuda dropout=0.5 alphaupdate.lambda_bar=0.01`, the training loss rapidly increased and the run crashed. I found this can be avoided by decreasing `mu`. However, after doing so, the accuracy first rose (to around 21%) but then fell back to around 18% and stayed there for the rest of training (500 epochs). Have you observed this behavior?
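For reference, here is a minimal sketch of the workaround I'm using while debugging: watch the loss and shrink `mu` when it blows up or goes non-finite, instead of picking a fixed smaller value up front. The function name, thresholds, and window logic are all my own illustration, not code from this repo:

```python
import math

def guarded_mu(window_losses, mu, shrink=0.5, blowup=10.0):
    """Hypothetical divergence guard (illustrative, not repo code).

    window_losses: recent training losses, oldest first.
    If the latest loss is non-finite, or has grown more than `blowup`x
    relative to the start of the window, shrink the geometry weight mu
    before continuing training; otherwise leave it unchanged.
    """
    latest = window_losses[-1]
    diverged = (not math.isfinite(latest)
                or (len(window_losses) > 1 and latest > blowup * window_losses[0]))
    return mu * shrink if diverged else mu

# Loss jumped from 0.5 to 6.0 (> 10x): halve mu.
print(guarded_mu([0.5, 6.0], 1.0))   # 0.5
# Loss decreasing: keep mu as-is.
print(guarded_mu([0.5, 0.4], 1.0))   # 1.0
```

This avoids the immediate crash, but as noted above the accuracy still stalls around 18%, so it treats the symptom rather than the underlying instability.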