netsharecmu/NetShare

How to train the model with naive differential privacy?

lurw2000 opened this issue · 2 comments

I'm trying to train the model on a small test dataset with naive differential privacy. I changed a few settings in the configuration, but I got empty output.
Here is the changed configuration:

{
    "global_config": {
        "original_data_file": "../traces/simple-network/Switch1_Ethernet1_to_PC1_Ethernet0-correct/raw.pcap",
        "dataset_type": "pcap",
        "n_chunks": 1,
        "dp": true
    },
    "model_manager": {
        "class": "NetShareManager",
        "config": {
            "pretrain_non_dp": false,
            "pretrain_non_dp_reduce_time": null,
            "pretrain_dp": false
        }
    },
    "model": {
        "class": "DoppelGANgerTFModel",
        "config": {
            "batch_size": 1,
            "sample_len": [
                1
            ],
            "iteration": 80000,
            "extra_checkpoint_freq": 4000,
            "epoch_checkpoint_freq": 1000,
            "gen_feature_num_layers": 1,
            "gen_feature_num_units": 100,
            "gen_attribute_num_layers": 1,
            "gen_attribute_num_units": 32,
            "disc_num_layers": 1,
            "disc_num_units": 32,
            "attr_disc_num_layers": 1,
            "attr_disc_num_units": 32,
            "dp_noise_multiplier": 0.2797,
            "dp_l2_norm_clip": 1.0
        }
    },
    "default": "pcap.json"
}

I suspect this is partly because I set pretrain_dp=false. But if I set pretrain_dp=true, I am asked to provide a model pretrained on a public dataset.

We have not yet ported the full DP functionality to this "newer" version of the codebase; hopefully it will be done soon. Apologies for any inconvenience :(

We will keep this issue open and keep you posted once we have finished that part. Thanks for your patience!

We recently refactored the code to PyTorch, and support for differential privacy with the Wasserstein loss is, unfortunately, a known open issue in the Opacus library. We may find a workaround in the future to support this.
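For context on what the `dp_noise_multiplier` and `dp_l2_norm_clip` settings in the config above would control, here is a minimal NumPy sketch of the standard DP-SGD gradient sanitization step (per-sample clipping followed by Gaussian noise). This is only an illustration of the mechanism, not NetShare's actual implementation; the function name and helper structure are hypothetical.

```python
import numpy as np

def sanitize_gradients(per_sample_grads, l2_norm_clip, noise_multiplier, rng):
    """Illustrative DP-SGD sanitization (not NetShare's code).

    Each per-sample gradient is clipped to L2 norm <= l2_norm_clip,
    the clipped gradients are averaged, and Gaussian noise with
    standard deviation noise_multiplier * l2_norm_clip / batch_size
    is added to the average.
    """
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down gradients whose norm exceeds the clipping bound.
        factor = min(1.0, l2_norm_clip / (norm + 1e-12))
        clipped.append(g * factor)
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(
        0.0,
        noise_multiplier * l2_norm_clip / len(per_sample_grads),
        size=mean_grad.shape,
    )
    return mean_grad + noise
```

With `dp_noise_multiplier: 0.2797` and `dp_l2_norm_clip: 1.0` as in the config, every per-sample gradient would be clipped to norm 1.0 before noise is added. Note that this sanitization is what Opacus automates for standard losses; the open issue mentioned above is about applying it to the Wasserstein (critic) loss used by DoppelGANger.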