fjxmlzn/DoppelGANger

Reproducing Figure 1 WWT again

CubicQubit opened this issue · 26 comments

Hi @fjxmlzn,

Thank you for the phenomenal effort on the repo. Additionally, thank you for sharing the code for calculating the autocorrelation; it helped me match the autocorrelation curve in Figure 1.

I'm trying to reproduce the curve for DoppelGANger in Figure 1 below:
[Screenshot: Figure 1 from the paper]

First of all, I used the WWT data provided in the GDrive for training and testing. In addition, I ran the framework provided in example_training(without_GPUTaskScheduler). I'm struggling to match the curve in Figure 1 with two different sample_len values.

sample_len=5

[Autocorrelation plot, sample_len=5]

sample_len=10

[Autocorrelation plot, sample_len=10]

I was wondering if you can help me understand what went wrong and how I can reproduce the performance in the paper.
The hyperparameters used are:

"batch_size": 100,
"vis_freq": 200,
"vis_num_sample": 5,
"d_rounds": 1,
"g_rounds": 1,
"num_packing": 1,
"noise": True,
"feed_back": False,
"g_lr": 0.001,
"d_lr": 0.001,
"d_gp_coe": 10.0,
"gen_feature_num_layers": 1,
"gen_feature_num_units": 100,
"gen_attribute_num_layers": 3,
"gen_attribute_num_units": 100,
"disc_num_layers": 5,
"disc_num_units": 200,
"initial_state": "random",

"attr_d_lr": 0.001,
"attr_d_gp_coe": 10.0,
"g_attr_d_coe": 1.0,
"attr_disc_num_layers": 5,
"attr_disc_num_units": 200,

"epoch": [400],
"sample_len": [5, 10],
"extra_checkpoint_freq": [5],
"epoch_checkpoint_freq": [1],
"aux_disc": [True],
"self_norm": [True]

I know this is annoying, and I'm sorry to bother you! I'm trying to understand the parameters necessary to achieve your results. Could you share the hyperparameters that worked best for you, which may help recover the curves in Figure 1? Thank you, I appreciate it.

Hi,

The hyperparameters you listed should be the ones we used for generating Figure 1. (The only difference is that we used sample_len=10.) The results you showed look very different from what we got. Here are some points I want to double-check. If you are using example_training(without_GPUTaskScheduler), then:

  1. These hyperparameters are set inside https://github.com/fjxmlzn/DoppelGANger/blob/master/example_training(without_GPUTaskScheduler)/main.py, NOT https://github.com/fjxmlzn/DoppelGANger/blob/master/example_training/config.py.
  2. The results will be saved in example_training(without_GPUTaskScheduler)/test, NOT results. And since you are not using GPUTaskScheduler, newer runs will overwrite the results in this folder, so you need to manage it manually (see the sketch after this list).
  3. Make sure you finish all 400 epochs (as set in the parameters).
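
For point 2, here is a minimal sketch of backing up that folder before starting another run; the paths and naming scheme are illustrative assumptions, not part of the repo:

import shutil
import time

# Hypothetical backup step: copy the current results folder to a timestamped
# name before launching the next run, since a new run would overwrite it.
src = "example_training(without_GPUTaskScheduler)/test"
dst = "{}_{}".format(src, time.strftime("%Y%m%d_%H%M%S"))
shutil.copytree(src, dst)
print("Backed up results to", dst)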

Let me know if you still have problems reproducing it.

Okay, I understand your points. That's exactly what I did.

  1. The hyperparameters are in main.py from the example_training(without_GPUTaskScheduler) folder. They are:
generator = DoppelGANgerGenerator(
        feed_back=False,
        noise=True,
        feature_outputs=data_feature_outputs,
        attribute_outputs=data_attribute_outputs,
        real_attribute_mask=real_attribute_mask,
        sample_len=sample_len)
discriminator = Discriminator()
attr_discriminator = AttrDiscriminator()

epoch = 400
batch_size = 100
vis_freq = 200
vis_num_sample = 5
d_rounds = 1
g_rounds = 1
d_gp_coe = 10.0
attr_d_gp_coe = 10.0
g_attr_d_coe = 1.0
extra_checkpoint_freq = 5
num_packing = 1

which should be the same as the parameters in config.py.

  2. I can confirm the results are saved in example_training(without_GPUTaskScheduler)/test, and those are the mid-run checkpoints I used to generate the samples. I wrote a generate.py for the without_GPUTaskScheduler setup that loads those checkpoints and generates 100,000 samples (50k train, 50k test) to compare against the real data. This file basically just copies the generate_task from example_generating_data.

  3. I trained for all 400 epochs, which takes roughly 11-12 hours of wall-clock time. However, I used the data sampled from the checkpoint at epoch 399 to generate Figure 1. Maybe checkpoint 399 suffered from overfitting? Did you use a different checkpoint?

We used checkpoint 399 to generate it. I am not sure why it gave bad results. Would you mind sharing the entire code folder (including example_training(without_GPUTaskScheduler)/main.py and the generate.py you wrote) via Google Drive or similar, so that I can look into it?

Hi, thank you for offering to help look at the code. I'm honestly not sure what went wrong either.

Here is the GDrive link with the code and epoch_id-399 data (generated samples, viz samples): https://drive.google.com/drive/folders/1M2QvzZjyEP9xevFYjNNurFmxEZKhVrEU?usp=sharing

Let me know if I can help with anything else. There should also be a TensorBoard file there.

I reproduced the same curve as Figure 1.
[Autocorrelation plot]

I'm guessing you used the training version with GPUTaskScheduler and sample_len=10, right? I'll try that one next then.

I used the version without GPUTaskScheduler and sample_len=10.

Oh okay, nice. Then I'm guessing it's either 1) I got a bad run, so I might rerun, or 2) I set up the Python environment wrong (e.g., a bad TensorFlow build or floating-point issues). If you are using conda or another environment, can you share your pip freeze with me? Thanks @alireza-msve. I ask mainly because I also just ran the version without GPUTaskScheduler without changing anything else.

Edit: wait, if you also used the version without GPUTaskScheduler, did you write an extra file to generate the time series? My generate.py file is in the GDrive; it should be the same as the generate task file from the version with GPUTaskScheduler.

I used your generate.py file. Python 3.7.10 and TensorFlow 1.14.0.

Thank you @alireza-msve very much for sharing the information!

@CubicQubit Thanks for sharing the code you used. I wanted to debug this for you, but unfortunately I ran out of GPU hours on the cluster I am using a few days ago, and it will take some time before I get more. But here is some information that might be helpful:

  1. Using or not using GPUTaskScheduler shouldn't have any influence on the result, as long as the hyperparameters are the same.
  2. For the results in the paper, we had 3 random trials for this dataset with sample_len=10. We picked a random run for drawing Figure 1. I just checked all these runs, and all of them have much better autocorrelation than the one you got:
    autocorr_all_runs.pdf

Since @alireza-msve used exactly the code you shared, I would suggest running it again to double-check. If you still get bad autocorrelation plots, please let me know.

If you have some time, could you please look at the code below for autocorrelation? I see that for different epsilon values the figure doesn't change at all.
import torch

EPS = 0.55

def autocorr(X, Y):
    # Pearson correlation between X and Y along the time dimension
    Xm = torch.mean(X, 1).unsqueeze(1)
    Ym = torch.mean(Y, 1).unsqueeze(1)
    r_num = torch.sum((X - Xm) * (Y - Ym), 1)
    r_den = torch.sqrt(torch.sum((X - Xm) ** 2, 1) * torch.sum((Y - Ym) ** 2, 1))

    # avoid zeros before the division
    r_num[r_num == 0] = EPS
    r_den[r_den == 0] = EPS

    r = r_num / r_den
    r[r > 1] = 0
    r[r < -1] = 0

    return r

def get_autocorr(feature):
    # feature: numpy array of shape (num_samples, sequence_length)
    feature = torch.from_numpy(feature)
    feature_length = feature.shape[1]
    autocorr_vec = torch.Tensor(feature_length - 2)

    for j in range(1, feature_length - 1):
        autocorr_vec[j - 1] = torch.mean(autocorr(feature[:, :-j], feature[:, j:]))

    return autocorr_vec
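
For reference, here is a minimal usage sketch of the get_autocorr helper above. The toy array, its shape, and the matplotlib plotting are illustrative assumptions, not part of the original code:

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data: 1000 noisy periodic series of length 50,
# shaped (num_samples, sequence_length) as get_autocorr expects.
rng = np.random.RandomState(0)
t = np.arange(50)
features = (np.sin(2 * np.pi * t / 10)[None, :]
            + 0.1 * rng.randn(1000, 50)).astype(np.float32)

acf = get_autocorr(features)  # uses autocorr/get_autocorr defined above
plt.plot(range(1, len(acf) + 1), acf.numpy())
plt.xlabel("Time lag")
plt.ylabel("Autocorrelation")
plt.show()

The same call can be applied to both the real and the generated feature arrays to overlay the two curves, as in Figure 1.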

This EPS is for ensuring numerical stability when calculating autocorrelation, NOT the DP parameter. You should not change it.

The epsilon in the DP results is controlled by "dp_noise_multiplier": [0.01, 0.1, 1.0, 2.0, 4.0]. The code will print the DP epsilon computed from it.

Is it possible to share the code for DP-autocorrelation?

The code is completely the same. You just generate data using https://github.com/fjxmlzn/DoppelGANger/tree/master/example_dp_generating_data, and then use #20 (comment) to draw autocorrelation.

The DP parameter (including epsilon) is printed from

print("Using DP training")
print("The final DP parameters will be:")
compute_dp_sgd_privacy(
self.data_feature.shape[0],
self.batch_size * self.num_packing,
noise_multiplier,
self.epoch * self.num_packing,
self.dp_delta)
sys.stdout.flush()
when you do training.

Got it, thank you.
The first two epsilon values are the same as in the updated arXiv version, but the last three values ("eps = 9.39", "eps = 1.12", "eps = 0.349") are different, where "dp_noise_multiplier": [0.01, 0.1, 1.0, 2.0, 4.0].

This is weird. Here is minimal code for computing these epsilons.

from tensorflow_privacy.privacy.analysis.compute_dp_sgd_privacy_lib import compute_dp_sgd_privacy

if __name__ == "__main__":
    NOISE_MULTIPLIERS = [0.01, 0.1, 1.0, 2.0, 4.0]
    EPOCH = 15
    EPSILONS = [
        compute_dp_sgd_privacy(
            50000,
            100,
            noise_multiplier * 0.5,
            EPOCH,
            1e-5)[0]
        for noise_multiplier in NOISE_MULTIPLIERS]
    print(EPSILONS)

I am getting [187266998.24801102, 1641998.2480110272, 10.515654630508177, 1.451819290643501, 0.45555693961174304], which are the numbers in the arXiv version.

If you get different numbers from it, then probably it is because of TF Privacy updates. I am using TF Privacy 0.5.1.
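
As a quick check of which TF Privacy version is installed, here is a small sketch using pkg_resources (assuming the package was installed from PyPI under the name tensorflow-privacy):

import pkg_resources

# Differences in the installed TF Privacy version (e.g., 0.5.1 vs 0.6.0)
# can change the epsilon values reported by compute_dp_sgd_privacy.
print(pkg_resources.get_distribution("tensorflow-privacy").version)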

You are probably right. I ran the above code again and got similar values as before. I am using TF Privacy 0.6.0.

Just double-checking, you mean you get the values you shared in #22 (comment) right?

Yes

Cool. Then it should be due to TF Privacy updates.

@fjxmlzn @fxctydfty Man, I hate TF. Do you guys see these errors when running main.py? I'm seriously thinking it's my environment:

WARNING:tensorflow:Entity <bound method MultiRNNCell.call of <tensorflow.python.ops.rnn_cell_impl.MultiRNNCell object at 0x7f063006e350>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method MultiRNNCell.call of <tensorflow.python.ops.rnn_cell_impl.MultiRNNCell object at 0x7f063006e350>>: AttributeError: module 'gast' has no attribute 'Index'

WARNING:tensorflow:From /home/loctrinh/anaconda3/envs/doppelganger/lib/python3.7/site-packages/tensorflow/python/ops/rnn_cell_impl.py:961: calling Zeros.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version. Instructions for updating: Call initializer instance with the dtype argument instead of passing it to the constructor

WARNING:tensorflow:Entity <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f05c4197b90>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f05c4197b90>>: AttributeError: module 'gast' has no attribute 'Index'

WARNING:tensorflow:Entity <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x7f05bc1a1410>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x7f05bc1a1410>>: AttributeError: module 'gast' has no attribute 'Index'

`pip install gast==0.2.2 --force-reinstall` might fix this.

[Autocorrelation plot after rerunning]

I got something closer after rerunning. Still not the same, but I'll take it. @fjxmlzn thank you for your help! Please close this issue.

Great! This one looks close to what it should be.

[Autocorrelation plot]

After training for 12.5 hours using the version without GPUTaskScheduler on my local machine (RTX 3060), this was my plot for the ACF. I used Python 3.7.0 and TensorFlow 1.14.0. What's going on haha

@rllyryan Thanks for sharing the plot, but it looks weird. In follow-up projects, we ran DoppelGANger on this dataset several more times, and we were able to get good autocorrelation plots quite stably.

Let's discuss it in #46