The following are some problems with the code that I noticed while trying to reproduce the results of the original paper. In addition, the public dataset and preprocessing methods should be updated in order to reproduce the results in the original paper.
1. The noise sampling method is different from the original code: while reproducing the results, using the same noise sampling mechanism, `B * torch.uniform((S, Z))`, fails to fit the original dataset. It should be changed to something that follows a Wiener process more closely, and not `torch.random((B, S, Z))`.

(timegan-pytorch/models/utils.py, line 114 in 7d04455)
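A minimal sketch of the contrast, i.i.d. per-step noise versus noise with Wiener-process-like temporal structure; the function names and shapes are illustrative, not the repository's actual code:

```python
import torch

def sample_noise_iid(batch_size, seq_len, z_dim):
    # One i.i.d. uniform draw per batch element, per time step:
    # there is no temporal structure between consecutive steps.
    return torch.rand(batch_size, seq_len, z_dim)

def sample_noise_wiener(batch_size, seq_len, z_dim):
    # Wiener-process-like noise: accumulate Gaussian increments along
    # the time axis so that consecutive steps are correlated.
    increments = torch.randn(batch_size, seq_len, z_dim)
    return torch.cumsum(increments, dim=1)
```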
2. The MSE losses do not respect sequence length, so the model learns padding values whenever the sequences in a batch have unequal lengths. This needs fixing at every MSE calculation, especially in the recovery and supervisor forward passes. It should not be an issue if the public dataset is used, since its sequences all have the same length.

(timegan-pytorch/models/timegan.py, line 414 in 7d04455)
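A minimal sketch of a length-masked MSE, assuming `lengths` holds each sequence's true length and the tensors are shaped `(B, S, F)`; all names here are illustrative, not the repository's API:

```python
import torch
import torch.nn.functional as F

def masked_mse(pred, target, lengths):
    """MSE that ignores padded time steps.

    pred, target: (B, S, F) tensors; lengths: (B,) true sequence lengths.
    """
    B, S, _ = pred.shape
    # (B, S) mask: 1.0 for real steps, 0.0 for padding.
    mask = (torch.arange(S, device=pred.device)[None, :]
            < lengths[:, None]).float()
    per_step = F.mse_loss(pred, target, reduction="none").mean(dim=-1)  # (B, S)
    return (per_step * mask).sum() / mask.sum().clamp(min=1.0)
```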
3. `G_loss` is wrong in the logging: an accidental `torch.sqrt` was added that is not in the original code.

(timegan-pytorch/models/utils.py, line 120 in 7d04455)
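To illustrate with a hypothetical logging line (not the repository's actual code): the `sqrt` only distorts the printed value, not the gradients, but it makes the logs incomparable with the original TensorFlow implementation.

```python
import torch

G_loss = torch.tensor(4.0)  # placeholder value, purely for illustration

# With the accidental sqrt the printed value (2.0) no longer matches
# what the original code would report (4.0):
print(f"G_loss (with sqrt): {torch.sqrt(G_loss).item():.4f}")
print(f"G_loss (original):  {G_loss.item():.4f}")
```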
4. Padding should be added during the inference stage.

(timegan-pytorch/models/utils.py, line 271 in 7d04455)
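A minimal sketch of one way to do this, overwriting generated steps beyond each sequence's true length; the padding value and all names are assumptions, not the repository's constants:

```python
import torch

def pad_generated(x_hat, lengths, pad_value=-1.0):
    # Overwrite generated steps past each sequence's true length so
    # that downstream code never reads values beyond the end.
    B, S, _ = x_hat.shape
    mask = torch.arange(S, device=x_hat.device)[None, :] < lengths[:, None]
    return torch.where(mask[..., None], x_hat,
                       torch.full_like(x_hat, pad_value))
```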
5. The original code has a sigmoid activation function on the recovery network's output. If I'm not mistaken, the hide-and-seek competition code did not add this, probably for heuristic reasons.

(timegan-pytorch/models/timegan.py, line 150 in 7d04455)
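A minimal sketch of where that activation would sit; this is an illustrative module, not the repository's actual class, and it assumes the features were min-max scaled to [0, 1] during preprocessing so that (0, 1) outputs are valid:

```python
import torch
import torch.nn as nn

class Recovery(nn.Module):
    # GRU-based recovery network with the sigmoid output activation
    # used in the original TimeGAN code.
    def __init__(self, hidden_dim, feature_dim):
        super().__init__()
        self.rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.linear = nn.Linear(hidden_dim, feature_dim)

    def forward(self, h):
        out, _ = self.rnn(h)
        return torch.sigmoid(self.linear(out))  # the activation at issue
```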
6. It is arguable whether the losses should be computed instance-wise or step-wise; this remains to be experimented with. See the related discussion in Classification with Discriminator (jsyoon0823/TimeGAN#11 (comment)).
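The two options could be sketched as follows, assuming the discriminator emits one logit per time step with shape `(B, S)`; names and shapes are illustrative:

```python
import torch
import torch.nn.functional as F

def d_loss_stepwise(logits, real):
    # Step-wise: every time step is classified as real/fake
    # independently, so the loss sees (B, S) decisions.
    labels = torch.full_like(logits, float(real))
    return F.binary_cross_entropy_with_logits(logits, labels)

def d_loss_instancewise(logits, real):
    # Instance-wise: pool the per-step logits into a single score per
    # sequence, shape (B,), and classify whole sequences.
    pooled = logits.mean(dim=1)
    labels = torch.full_like(pooled, float(real))
    return F.binary_cross_entropy_with_logits(pooled, labels)
```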
Hi @birdx0810, regarding point no. 5 above, was there a reason for excluding the sigmoid on the recovery network's output? Was it just to allow the network to learn real-valued outputs instead of values restricted to (0, 1), or was there some other reason?

@eonu Sorry for the late reply, but this code base was derived from the NeurIPS 2020 hide-and-seek privacy challenge hosted by the van der Schaar Lab, which is the team that proposed the TimeGAN model. I assume the sigmoid was just accidentally left out, and I only realized it later. IMO it should be added, which is why I listed it in this issue.