birdx0810/timegan-pytorch

Known issues regarding the implementation.

Opened this issue · 3 comments

The following are some problems with the code that I noticed while trying to reproduce the results of the original paper.

  1. The public dataset and preprocessing methods should be updated to reproduce the results of the original paper; a sketch of the original pipeline is below.
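For reference, a minimal sketch of the preprocessing in the original repository (jsyoon0823/TimeGAN, data_loading.py), as far as I can tell: min-max scale each feature, then cut overlapping windows of length seq_len and shuffle them. The function name and CSV layout here are illustrative, not the repo's exact code.

    import numpy as np

    def load_real_data(path, seq_len):
        data = np.loadtxt(path, delimiter=",", skiprows=1)
        data = data[::-1]  # the original code flips the rows to chronological order
        data_min = data.min(axis=0)
        data = (data - data_min) / (data.max(axis=0) - data_min + 1e-7)  # min-max scale per feature
        windows = [data[i:i + seq_len] for i in range(len(data) - seq_len)]
        np.random.shuffle(windows)  # shuffle the windows so they look i.i.d.
        return np.asarray(windows)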

  2. The noise sampling method is different from the original code.
    While reproducing the results, the batched sampling mechanism below failed to fit the original dataset; the original code effectively draws B independent uniform samples of shape (S, Z):

    Z_mb = torch.rand((args.batch_size, args.max_seq_len, args.Z_dim))

It should be changed to something like the code below, sampling each sequence independently rather than with a single torch.rand((B, S, Z)) call, to better follow a Wiener process:

    # B = batch size, S = max sequence length, Z = noise dimension
    Z_mb = torch.zeros(B, S, Z)
    for idx in range(B):
        Z_mb[idx] = torch.rand(S, Z)  # independent uniform noise per sequence
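Note that, as far as I can tell, the original TensorFlow implementation's random_generator additionally zeroes the noise beyond each sequence's true length, so padded steps receive no noise at all. A sketch of that behaviour, assuming T_mb holds the per-batch true sequence lengths (an assumption of this snippet, as are B, S, Z from above):

    import torch

    Z_mb = torch.zeros(B, S, Z)
    for idx in range(B):
        # noise only on the real steps; padded steps stay zero
        Z_mb[idx, :T_mb[idx]] = torch.rand(T_mb[idx], Z)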
  3. The MSE losses do not respect the sequence lengths, which would make the model learn padding values when the sequences are not of equal length. This issue applies to all MSE calculations, especially the recovery and supervisor forward passes; it should not be a problem if the public dataset is used. A masked variant is sketched after the code below.

    E_loss_T0 = torch.nn.functional.mse_loss(X_tilde, X)
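A minimal sketch of a length-masked MSE, assuming T is a (B,) tensor of true sequence lengths; masked_mse and the shapes here are illustrative, not the repo's code:

    import torch

    def masked_mse(X_tilde, X, T):
        # X_tilde, X: (B, S, F); T: (B,) tensor of true sequence lengths
        B, S, _ = X.shape
        mask = (torch.arange(S, device=X.device)[None, :] < T[:, None]).float()  # (B, S)
        se = ((X_tilde - X) ** 2).mean(dim=-1)  # (B, S) per-step squared error
        return (se * mask).sum() / mask.sum()   # average over real steps only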

  4. G_loss is logged incorrectly: an accidental np.sqrt was added that is not in the original code.

    G_loss = np.sqrt(G_loss.item())
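Dropping the square root, the logged value should presumably just be:

    G_loss = G_loss.item()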

  5. Padding should be applied during the inference stage; see the sketch below.
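A minimal sketch of what that could look like, assuming X_hat is the generated batch and T the target sequence lengths (both illustrative names): steps past each sequence's length are zeroed out as padding.

    import torch

    def pad_generated(X_hat, T):
        # X_hat: (B, S, F) generated data; T: (B,) tensor of target lengths
        B, S, _ = X_hat.shape
        mask = (torch.arange(S, device=X_hat.device)[None, :] < T[:, None]).float()
        return X_hat * mask.unsqueeze(-1)  # zero out steps past each true length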

  6. The original code has a sigmoid activation function on the recovery network output. If I'm not mistaken, the Hide-and-Seek competition code did not add it, probably for heuristic reasons.

    return X_tilde
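Restoring it would presumably amount to one line at the end of the recovery forward pass, assuming the inputs were min-max scaled to [0, 1]:

    X_tilde = torch.sigmoid(X_tilde)  # bound outputs to (0, 1) to match the scaled inputs
    return X_tilde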

  7. An argument for whether the loss should be computed instance-wise or step-wise; to be experimented with (see the sketch after the link below).
    jsyoon0823/TimeGAN#11 (comment)
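For illustration, the two reductions only differ when sequence lengths are unequal: instance-wise averages each sequence over its own length first, while step-wise pools all valid steps. A sketch with illustrative names, reusing the masking idea from point 3:

    import torch

    def seq_mse(X_tilde, X, T, instance_wise=True):
        # X_tilde, X: (B, S, F); T: (B,) tensor of true sequence lengths
        B, S, _ = X.shape
        mask = (torch.arange(S, device=X.device)[None, :] < T[:, None]).float()
        se = ((X_tilde - X) ** 2).mean(dim=-1) * mask  # (B, S), padding zeroed
        if instance_wise:
            # average each sequence over its own length, then over the batch
            return (se.sum(dim=1) / T.float()).mean()
        return se.sum() / mask.sum()  # pool all valid steps and average once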

These issues have not been resolved as of now. It might take a while to resolve them, as I have other things to settle.
Feel free to submit a PR.

Hi @birdx0810, regarding point no. 6 above, was there a reason for excluding the sigmoid on the recovery network output?

Was it just to allow the network to learn real-valued outputs instead of only (0, 1)? Or some other reason?

@eonu Sorry for the late reply. This code base was derived from the NeurIPS 2020 Hide-and-Seek privacy challenge hosted by the van der Schaar Lab, the team that proposed the TimeGAN model. I assume the sigmoid was just accidentally left out, and I only realized it later. It should be added IMO, which is why I listed it in this issue.