greenelab/tybalt

Zeroed-out training

Closed this issue · 3 comments

Hello,

I've essentially copied your tybalt_vae.ipynb line for line into a new file to train on some of my own data, but after training on my dataset it appears that no training is occurring at all:

  1. The plot of the loss over the iterations is blank
  2. The encoded batchnorm file is zeroed out

However, the code otherwise seems to function as intended (e.g. no errors are raised). I've noticed that your training data is scaled to [0,1], while mine is centered on 0 with unit variance (i.e. not in [0,1]). Could this be the issue?

OK, I re-scaled my data to [0,1], and that fixed the issue. Unfortunately, this doesn't fit the rest of my pipeline; is there any guidance on how to adjust the training so it can accommodate differently scaled data?
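
For reference, the rescaling was just a per-feature min-max transform; a minimal sketch, assuming the data is in a pandas DataFrame (the file name here is hypothetical):

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Hypothetical input: samples x features, zero-centered with unit variance
data_df = pd.read_table("my_data.tsv", index_col=0)

# Min-max scale each feature to [0, 1] so it matches the range the
# notebook's training expects
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_df = pd.DataFrame(
    scaler.fit_transform(data_df),
    index=data_df.index,
    columns=data_df.columns,
)
```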

👍 to @cgreene's comment here

You can try swapping in a different loss function; a sketch of one way to do that is below.
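
A minimal sketch, not the notebook's exact architecture (it omits details from the actual model such as batch normalization and any KL warm-up): replace the sigmoid decoder output and binary cross-entropy reconstruction term with a linear output and mean squared error, which is compatible with zero-centered, unit-variance data. Dimensions here are placeholders:

```python
from keras import backend as K
from keras import metrics
from keras.layers import Input, Dense, Lambda
from keras.models import Model

original_dim = 5000   # illustrative: number of input features
latent_dim = 100      # illustrative: latent dimensionality

# Encoder
x = Input(shape=(original_dim,))
z_mean = Dense(latent_dim)(x)
z_log_var = Dense(latent_dim)(x)

# Reparameterization trick
def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim))
    return z_mean + K.exp(z_log_var / 2) * epsilon

z = Lambda(sampling)([z_mean, z_log_var])

# Decoder: linear output instead of sigmoid, so reconstructions are
# not constrained to [0, 1]
x_decoded = Dense(original_dim, activation='linear')(z)

# Reconstruction term: mean squared error instead of binary cross-entropy
reconstruction_loss = original_dim * metrics.mean_squared_error(x, x_decoded)
kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
vae_loss = K.mean(reconstruction_loss + kl_loss)

vae = Model(x, x_decoded)
vae.add_loss(vae_loss)
vae.compile(optimizer='adam')
```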

😂 yup - wrong thread! 🤦‍♂️