A collection of tools and resources for managing the statistical disclosure control of trained machine learning models. For a brief introduction, see Smith et al. (2022).
A collection of user guides can be found in the 'user_stories' folder of this repository. These guides include configurable examples from the perspective of both a researcher and a TRE, with separate scripts for each. Instructions on which scripts to use and how to run them are included in the README of the 'user_stories' folder.
aisdc
attacks
Contains a variety of privacy attacks on machine learning models, including membership and attribute inference.
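As a minimal sketch of how an attack might be run (the Target and WorstCaseAttack names and the add_processed_data call are assumptions about this package's API; see worst_case_attack_example.py for the canonical version):

# Hypothetical sketch of a worst-case membership inference attack.
# Class and method names are assumptions; check the examples scripts.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from aisdc.attacks.target import Target
from aisdc.attacks.worst_case_attack import WorstCaseAttack

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

# Train the model whose privacy leakage we want to assess.
model = RandomForestClassifier(min_samples_leaf=1).fit(X_train, y_train)

# Wrap the model and its train/test splits so the attack can query them.
target = Target(model=model)
target.add_processed_data(X_train, y_train, X_test, y_test)

# Repeat the attack several times and aggregate the results.
attack = WorstCaseAttack(n_reps=10)
attack.attack(target)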
preprocessing
Contains preprocessing modules for test datasets.
safemodel
The safemodel package is an open source wrapper for common machine learning models. It is designed for use by researchers in Trusted Research Environments (TREs) where disclosure control methods must be implemented. Safemodel aims to give researchers greater confidence that their models comply with disclosure control requirements.
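To illustrate the wrapper pattern, a minimal sketch (the class name SafeDecisionTreeClassifier and the preliminary_check method are assumptions drawn from the package layout; the example_notebooks show the canonical usage):

# Hypothetical sketch of the safemodel wrapper pattern.
# Names here are assumptions; see the example_notebooks.
from sklearn.datasets import load_breast_cancer

from aisdc.safemodel.classifiers import SafeDecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# The safe wrapper behaves like its scikit-learn counterpart but applies
# disclosure-control defaults such as a minimum leaf size.
model = SafeDecisionTreeClassifier(min_samples_leaf=10)
model.fit(X, y)

# Ask the wrapper whether the current hyper-parameters look disclosive
# before requesting release of the trained model.
msg, disclosive = model.preliminary_check()
print(msg)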
docs
Contains Sphinx documentation files.
example_notebooks
Contains short tutorials on the basic concept of "safe_XX" versions of machine learning algorithms, and examples of some specific algorithms.
examples
Contains examples of how to run the code contained in this repository:
- How to simulate attribute inference attacks: attribute_inference_example.py.
- How to simulate membership inference attacks:
  - Worst case scenario attack: worst_case_attack_example.py.
  - LIRA scenario attack: lira_attack_example.py.
- Integration of attacks into safemodel classes: safemodel_attack_integration_bothcalls.py.
risk_examples
Contains hypothetical examples of data leakage through machine learning models, as described in the Green Paper.
tests
Contains unit tests.
Documentation is hosted here: https://ai-sdc.github.io/AI-SDC/
Clone the repository and install the dependencies (safest in a virtual env):
$ git clone https://github.com/AI-SDC/AI-SDC.git
$ cd AI-SDC
$ pip install -r requirements.txt
Then run the tests:
$ pip install pytest
$ pytest .
Or run an example:
$ python -m examples.lira_attack_example
Install aisdc (safest in a virtual env) and manually copy the examples and example_notebooks folders:
$ pip install aisdc
Then to run an example:
$ python attribute_inference_example.py
Or start up jupyter notebook and run an example.
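For example (assuming Jupyter is installed in the same environment):
$ pip install jupyter
$ jupyter notebook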
Alternatively, you can clone the repo and install:
$ git clone https://github.com/AI-SDC/AI-SDC.git
$ cd AI-SDC
$ pip install .
This work was funded by UK Research and Innovation under Grant Numbers MC_PC_21033 and MC_PC_23006 as part of Phase 1 of the DARE UK (Data and Analytics Research Environments UK) programme (https://dareuk.org.uk/), delivered in partnership with Health Data Research UK (HDR UK) and Administrative Data Research UK (ADR UK). The specific projects were Semi-Automatic checking of Research Outputs (SACRO - MC_PC_23006) and Guidelines and Resources for AI Model Access from TrusTEd Research environments (GRAIMATTER - MC_PC_21033). This project has also been supported by the MRC and EPSRC [grant number MR/S010351/1]: PICTURES.