Zero'd out training
spadavec commented
Hello,
I've essentially copied your tybalt_vae.ipynb file line for line into a new notebook to train on my own data, but it appears that no training is actually occurring:
- The plot of the loss over the iterations is blank
- The encoded batchnorm file is zeroed out

Otherwise the code seems to run as intended (e.g. no errors are raised). I noticed that your training data is scaled to [0, 1], while mine is centered on 0 with unit variance (i.e. not in [0, 1]); could this be the issue?
spadavec commented
OK, I re-scaled my data to [0, 1] and that fixed the issue. Unfortunately, this scaling doesn't fit the rest of my pipeline; is there any guidance on how to adjust the training so it can accommodate differently scaled data?
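For reference, a minimal sketch of the rescaling I used, assuming the features live in a pandas DataFrame (`data_df` is a placeholder name, not from the notebook):

```python
# Sketch only: min-max scale each feature to [0, 1] before training.
# `data_df` is a placeholder for whatever DataFrame feeds the VAE.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler(feature_range=(0, 1))
data_scaled_df = pd.DataFrame(
    scaler.fit_transform(data_df),
    index=data_df.index,
    columns=data_df.columns,
)
```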
gwaybio commented
You can try swapping in a different loss function.
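For context: if the notebook follows the usual Keras VAE recipe, the reconstruction term is binary cross-entropy, which assumes inputs in [0, 1]. A minimal sketch of swapping it for mean squared error, assuming `z_mean`, `z_log_var`, `original_dim`, and the `vae` model are defined as in tybalt_vae.ipynb, might look like this:

```python
# Sketch only, not the repository's code. Assumes `z_mean`, `z_log_var`,
# `original_dim`, and the `vae` model come from a Keras VAE built as in
# tybalt_vae.ipynb; only the loss passed to `compile` changes.
from keras import backend as K
from keras import metrics


def make_vae_loss_mse(z_mean, z_log_var, original_dim):
    """VAE loss with a mean-squared-error reconstruction term,
    which does not assume inputs lie in [0, 1]."""
    def vae_loss(x, x_decoded_mean):
        # Reconstruction: MSE summed over features instead of binary cross-entropy
        reconstruction_loss = original_dim * metrics.mean_squared_error(x, x_decoded_mean)
        # Standard KL divergence between the approximate posterior and N(0, I)
        kl_loss = -0.5 * K.sum(
            1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
        return K.mean(reconstruction_loss + kl_loss)
    return vae_loss


# Hypothetical usage, mirroring the notebook's compile step:
# vae.compile(optimizer='adam', loss=make_vae_loss_mse(z_mean, z_log_var, original_dim))
```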
cgreene commented
😂 yup - wrong thread! 🤦♂️