DPO

Implementation of DPRL


Data Privacy Optimization

Code implementing my undergraduate thesis, "Value-oriented Privacy Optimization in Model Based Data Marketplace".

Prerequisites

  • Python 3, NumPy, scikit-learn, tqdm, PyTorch
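
The dependencies can typically be installed with pip (a suggested command; the repository may pin specific versions elsewhere):

$ pip3 install numpy scikit-learn tqdm torch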

Basic Usage

This repository includes two parts:

  • DPRL (Data Privacy optimization using Reinforcement Learning)

  • FDPRL (Federated Data Privacy optimization using Reinforcement Learning)

Note: FDPRL is the federated version of DPRL, although the two differ somewhat in framework design.

DPRL offers these abilities (a conceptual sketch follows the list):

  1. Given a list of epsilon budgets, allocate them across data records in a value-oriented way (discrete).
  2. Given only the total epsilon budget, optimize how that budget is distributed (continuous).
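
The sketch below illustrates these two modes conceptually only. Every name in it (value, allocate_discrete, allocate_continuous) is hypothetical rather than part of this repository's API, and a simple greedy/proportional heuristic stands in for the reinforcement learning policy that DPRL actually trains.

    # Hypothetical illustration; these names are not the repo's API.
    import numpy as np

    rng = np.random.default_rng(0)
    value = rng.random(5)  # stand-in for learned per-record value scores

    def allocate_discrete(value, budget_list):
        # Discrete mode: hand out a fixed list of epsilon budgets, giving
        # the largest budgets (least noise) to the most valuable records.
        order = np.argsort(-value)
        eps = np.empty(len(value))
        eps[order] = np.resize(np.sort(budget_list)[::-1], len(value))
        return eps

    def allocate_continuous(value, total_epsilon):
        # Continuous mode: split one total epsilon budget across records
        # in proportion to their estimated value.
        return total_epsilon * value / value.sum()

    print(allocate_discrete(value, budget_list=[0.1, 0.5, 1.0, 2.0, 4.0]))
    print(allocate_continuous(value, total_epsilon=8.0))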

Run Example Experiments

$ python3 examples.py

If a browser environment is available, running the Jupyter notebook version is recommended:

$ jupyter notebook examples.ipynb

Documentation

More detailed usage and implementation notes are available in the documentation, which can be built with:

$ make doc

(Documentation is powered by Sphinx.)

License

This project is licensed under the MIT License - see the LICENSE file for details.

References

[1] Collecting and Analyzing Multidimensional Data with Local Differential Privacy (ICDE '19)

[2] Data Valuation using Reinforcement Learning (ICML '20)

[3] Differentially Private Federated Learning: A Client Level Perspective (ICLR '19)

[4] Learning Differentially Private Recurrent Language Models (ICLR '18)