VFLAIR

Overview

Basic Introduction

VFLAIR is a general, extensible and light-weight VFL framework from THU-AIR that simulates the vanilla VFL training and evaluation process, along with several effective communication-improvement methods as well as attack and defense evaluations concerning data safety and privacy. Aside from neural networks (NNs) serving as local models for VFL systems, tree-based VFL is also supported.

Code Structure

VFLAIR
├── src
│   ├── evaluates           
│   |   ├── attacks                    # Attack simulator, implementations of attacks
│   │   |   ├── ...                    # Multiple Attack Implementation
│   |   ├── defenses                   # Implementation of defenses
│   │   |   ├── Trained CAE models     # Trained encoder-decoder models for CAE and DCAE
│   │   |   ├── ...                    # Defense Implementation & Functions
│   |   ├── MainTaskVFL                # Pipeline for BasicVFL & VFL with LI/FR/NTB
│   |   ├── MainTaskVFLwithBackdoor    # Pipeline for VFL with TB     
│   |   ├── MainTaskVFLwithNoisySample # Pipeline for VFL with NTB-NSB    
│   |   ├── MainTaskTVFL               # Pipeline for Tree-based VFL
│   ├── load                           # Load Configurations into training pipeline
│   |   ├── LoadConfigs.py             # Load basic parameters   
│   |   ├── LoadDataset.py             # Load dataset and do data partition
│   |   ├── LoadModels.py              # Initialize models
│   |   ├── LoadParty.py               # Initialize parties with data and model
│   |   ├── LoadTreeConfigs.py         # Load basic parameters for tree-based VFL
│   |   ├── LoadTreeParty.py           # Initialize tree-based VFL parties with data and model
│   ├── configs                        # Customizable configurations    
│   |   ├── standard_configs           # Standard configurations for NN-based VFL
│   │   │   ├── ...   
│   |   ├── active_party_attack        # Standard configurations for active party attack
│   │   │   ├── ...   
│   |   ├── passive_party_attack       # Standard configurations for passive party attack
│   │   │   ├── ...   
│   |   ├── tree                       # Standard configurations for tree-based VFL 
│   │   │   ├── ...   
│   |   ├── README.md                  # Guidance for configuration files 
│   |   ├── README_TREE.md             # Guidance for testing tree-based VFL
│   ├── models                         # bottom models & global models     
│   |   ├── model_parameters           # Some pretrained models
│   │   ├── ...                        # Implemented bottom models & global models
│   ├── party                          # party simulators   
│   |   ├── ...
│   ├── dataset                        # Dataset preprocessing functions       
│   |   ├── ...
│   ├── utils                          # Basic functions and customized functions for attack & defense
│   |   ├── ...
│   ├── exp_result                     # Store experiment results
│   |   ├── ...
│   ├── metrics                        # Benchmark and Defense Capability Score (DCS) definition
│   |   ├── ...
│   ├── main_pipeline.py               # Main VFL (launch this file for NN-based VFL)
│   ├── main_tree.py                   # Main tree-based VFL (launch this file for tree-based VFL)
├── usage_guidance                     # Detailed Usage  
│   ├── figures
│   |   ├── ...
│   ├── Add_New_Algorithm.md           # Guidance on how to add user-defined attack and defense algorithms
│   ├── Dataset_Usage.md               # Guidance on how to obtain datasets for experiments
├── README.md
├── requirements.txt                   # Installation requirements; we mainly use Python 3.8 and PyTorch 1.10 for experiments

Quick Start

Zero. Environment Preparation

  1. Download code files and install all the necessary requirements.
      # clone the repository
      $ git clone <link-to-our-github-repo>
      
      # install required packages
      $ conda create -n VFLAIR python=3.8
      $ conda activate VFLAIR
      $ pip install --upgrade pip
      $ cd VFLAIR
      $ pip install -r requirements.txt
      
      # install cuda related pytorch
      $ pip install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu113/torch_stable.html
  2. Datasets other than MNIST, CIFAR10 and CIFAR100 should be placed under the ../../share_dataset/ folder on your device, as sketched below.
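A minimal sketch of placing an extra dataset; only the ../../share_dataset/ path comes from this README, and <your_dataset_folder> is a placeholder for whatever dataset folder you downloaded:

      # create the shared dataset folder two levels above the repository
      $ mkdir -p ../../share_dataset
      # move your downloaded dataset folder there (<your_dataset_folder> is a placeholder)
      $ mv <your_dataset_folder> ../../share_dataset/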

One. Basic Benchmark Usage: A Quick Example

  1. Customize your own configurations.

    • Create a JSON file for your own evaluation configuration in the /src/configs folder. Name it whatever you want, e.g. my_configs.json.
    • /src/configs/basic_configs.json is a sample configuration file. You can copy it and modify the contents for your own purpose.
    • For detailed information about configuration parameters, see /src/configs/README.md.
  2. Use cd src and python main_pipeline.py --gpu 0 --configs <your_config_file_name> to start the evaluation process, as sketched below. A quick example can be launched by simply running cd src and python main_pipeline.py (a vanilla VFL training and testing process is launched). For more detailed descriptions, see Section Two.
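For example, a custom run could look like the following sketch. Here my_configs is a hypothetical file name, and passing the configuration name without its .json extension is an assumption based on the command format above:

      # copy the sample configuration and edit it for your own evaluation
      $ cd src
      $ cp configs/basic_configs.json configs/my_configs.json

      # launch the evaluation on GPU 0 with the customized configuration
      $ python main_pipeline.py --gpu 0 --configs my_configs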

Two. Advanced Usage: Implement Your Own Algorithm

  • How to add a new attack/defense?
    • usage_guidance/Add_New_Evaluation.md
  • Dataset Usage?
    • usage_guidance/Dataset_Usage.md
  • How to write Configuration files and how to specify hyper-parameters for evaluation?
    • src/configs/README.md and src/configs/README_TREE.md (a tree-based launch sketch follows this list)
  • What is Defense Capability Score (DCS)?
    • Refer to src/metrics for details.
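By analogy with the NN-based pipeline above, a tree-based evaluation could be launched roughly as follows. This is only a sketch: my_tree_configs is a placeholder name, and the --configs flag for main_tree.py is assumed to behave like the one for main_pipeline.py (see src/configs/README_TREE.md for the authoritative usage):

      $ cd src
      # launch tree-based VFL with a customized tree configuration file (placeholder name)
      $ python main_tree.py --configs my_tree_configs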

Contributing

We greatly appreciate any contribution to VFLAIR! We will also continue to improve the framework and documentation to make usage more flexible and convenient.

Please feel free to contact us if there's any problem with the code base or documentation!

Citation

If you use VFLAIR in your work, please cite our paper:

@article{zou2023vflair,
  title={VFLAIR: A Research Library and Benchmark for Vertical Federated Learning},
  author={Zou, Tianyuan and Gu, Zixuan and He, Yu and Takahashi, Hideaki and Liu, Yang and Ye, Guangnan and Zhang, Ya-Qin},
  journal={arXiv preprint arXiv:2310.09827},
  year={2023}
}