- About
- Secure Machine Learning
- Secure Federated Learning
- MPC
- Federated Learning
- Privacy Leakages of ML/FL
- Blogs
- Libraries and Frameworks
This is a curated list of resources related to the research and development of privacy-preserving machine learning.
- Machine Learning Classification over Encrypted Data, NDSS'15
- Oblivious Multi-Party Machine Learning on Trusted Processors, USENIX SECURITY'16
- Prio: Private, Robust, and Scalable Computation of Aggregate Statistics, NSDI'17
- SecureML: A System for Scalable Privacy-Preserving Machine Learning, S&P'17
- MiniONN: Oblivious Neural Network Predictions via MiniONN Transformations, CCS'17
- Chameleon: A Hybrid Secure Computation Framework for Machine Learning Applications, AsiaCCS'18
- DeepSecure: Scalable Provably-Secure Deep Learning, DAC'18
- Secure Computation for Machine Learning With SPDZ, NIPS'18
- ABY3: A Mixed Protocol Framework for Machine Learning, CCS'18
- SecureNN: Efficient and Private Neural Network Training, PoPETs'19
- Gazelle: A Low Latency Framework for Secure Neural Network Inference, USENIX SECURITY'18
- CHET: an optimizing compiler for fully-homomorphic neural-network inferencing, PLDI'19
- New Primitives for Actively-Secure MPC over Rings with Applications to Private Machine Learning, S&P'19
- Helen: Maliciously Secure Coopetitive Learning for Linear Models, S&P'19
- Efficient Multi-Key Homomorphic Encryption with Packed Ciphertexts with Application to Oblivious Neural Network Inference, CCS'19
- XONN: XNOR-based Oblivious Deep Neural Network Inference, USENIX Security'19
- QUOTIENT: two-party secure neural network training and prediction, CCS'19
- Secure Evaluation of Quantized Neural Networks, PoPETs'20
- ASTRA: High Throughput 3PC over Rings with Application to Secure Prediction, CCSW'19
- SoK: Modular and Efficient Private Decision Tree Evaluation, PoPETs'19
- Trident: Efficient 4PC Framework for Privacy Preserving Machine Learning, NDSS'20
- BLAZE: Blazing Fast Privacy-Preserving Machine Learning, NDSS'20
- FLASH: Fast and Robust Framework for Privacy-preserving Machine Learning, PoPETs'20
- Delphi: A Cryptographic Inference Service for Neural Networks, USENIX SECURITY'20
- FALCON: Honest-Majority Maliciously Secure Framework for Private Deep Learning, PoPETs'21
- MP2ML: A Mixed-Protocol Machine Learning Framework for Private Inference, ARES'20
- SANNS: Scaling Up Secure Approximate k-Nearest Neighbors Search, USENIX Security'20
- PySyft: A Generic Framework for Privacy Preserving Deep Learning
- Private Deep Learning in TensorFlow Using Secure Computation
- CryptoDL: Deep Neural Networks over Encrypted Data
- CryptoNets: Applying Neural Networks to Encrypted Data with High Throughput and Accuracy
- CrypTFlow: Secure TensorFlow Inference
- CrypTFlow2: Practical 2-Party Secure Inference, CCS'20
- AriaNN: Low-Interaction Privacy-Preserving Deep Learning via Function Secret Sharing
- Practical Privacy-Preserving K-means Clustering, PoPETs'20
- SWIFT: Super-fast and Robust Privacy-Preserving Machine Learning
- An Efficient 3-Party Framework for Privacy-Preserving Neural Network Inference, ESORICS'20
- Secure and Verifiable Inference in Deep Neural Networks, ACSAC'20
- Privacy-preserving Density-based Clustering, AsiaCCS'21
- SIRNN: A Math Library for Secure RNN Inference, S&P'21
- Let’s Stride Blindfolded in a Forest: Sublinear Multi-Client Decision Trees Evaluation, NDSS'21
- MUSE: Secure Inference Resilient to Malicious Clients
- DeepReDuce: ReLU Reduction for Fast Private Inference, USENIX Security'21
- Garbled Neural Networks are Practical
- GForce: GPU-Friendly Oblivious and Rapid Neural Network Inference, USENIX Security'21
- CryptGPU: Fast Privacy-Preserving Machine Learning on the GPU, S&P'21
- GALA: Greedy ComputAtion for Linear Algebra in Privacy-Preserved Neural Networks, NDSS'21
- Fantastic Four: Honest-Majority Four-Party Secure Computation With Malicious Security, USENIX Security'21
- When homomorphic encryption marries secret sharing: secure large-scale sparse logistic regression and applications in risk control, KDD'21
- Glyph: Fast and Accurately Training Deep Neural Networks on Encrypted Data, NeurIPS'20
- Mystique: Efficient Conversions for Zero-Knowledge Proofs with Applications to Machine Learning, USENIX Security'21
- SoK: Efficient Privacy-preserving Clustering, PoPETs'21
- ZEN: Efficient Zero-Knowledge Proofs for Neural Networks
- zkCNN: Zero Knowledge Proofs for Convolutional Neural Network Predictions and Accuracy
- Secure Quantized Training for Deep Learning
- Cerebro: A Platform for Multi-Party Cryptographic Collaborative Learning, USENIX Security'21
- Tetrad: Actively Secure 4PC for Secure Training and Inference
- Adam in Private: Secure and Fast Training of Deep Neural Networks with Adaptive Moment Estimation
- Privacy-Preserving Deep Learning, CCS'15
- Practical Secure Aggregation for Privacy Preserving Machine Learning, CCS'17
- Privacy-Preserving Deep Learning via Additively Homomorphic Encryption, TIFS'17
- NIKE-based Fast Privacy-preserving High-dimensional Data Aggregation for Mobile Devices, CACR'18
- PrivFL: Practical Privacy-preserving Federated Regressions on High-dimensional Data over Mobile Networks, CCSW'19
- VerifyNet: Secure and verifiable federated learning, TIFS'19
- PrivColl: Practical Privacy-Preserving Collaborative Machine Learning
- NPMML: A Framework for Non-interactive Privacy-preserving Multi-party Machine Learning, TDSC'20
- SAFER: Sparse secure Aggregation for FEderated leaRning
- Secure Byzantine-Robust Machine Learning
- Secure Single-Server Aggregation with (Poly)Logarithmic Overhead, CCS'20
- Batchcrypt: Efficient homomorphic encryption for cross-silo federated learning, USENIX ATC'21
- FedSel: Federated SGD under Local Differential Privacy with Top-k Dimension Selection, DASFAA'20
- FLGUARD: Secure and Private Federated Learning, Cryptology Eprint'21
- Biscotti: A Blockchain System for Private and Secure Federated Learning, TPDS'21
- POSEIDON: Privacy-Preserving Federated Neural Network Learning, NDSS'21
- PPFL: Privacy-preserving Federated Learning with Trusted Execution Environments, MobiSys'21
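Many of the secure aggregation papers above (starting from Practical Secure Aggregation, CCS'17) build on the same core trick: clients add pairwise masks that cancel out in the server's sum. A minimal toy sketch of that idea, assuming a helper `seed_with(i, j)` that stands in for the Diffie-Hellman key agreement of the real protocol (dropout handling via secret sharing is omitted):

```python
import random

def masked_update(client_id, update, peer_ids, seed_with, modulus=2**32):
    """Add pairwise masks that cancel when all clients' vectors are summed.

    seed_with(i, j) returns a PRG seed shared by the pair (i, j); in the
    real protocol this comes from a key agreement, and dropped-out clients
    are recovered via secret sharing -- both omitted in this toy sketch.
    """
    masked = list(update)
    for peer in peer_ids:
        if peer == client_id:
            continue
        rng = random.Random(seed_with(client_id, peer))
        for k in range(len(masked)):
            mask = rng.randrange(modulus)
            # The lower-id party adds the mask, the higher-id one subtracts
            # the identical mask, so each pair cancels in the total.
            if client_id < peer:
                masked[k] = (masked[k] + mask) % modulus
            else:
                masked[k] = (masked[k] - mask) % modulus
    return masked

# Demo: three clients; the shared seed is derived from the sorted pair.
def seed_with(i, j):
    return hash((min(i, j), max(i, j)))

updates = {1: [10, 20], 2: [1, 2], 3: [5, 5]}
masked = [masked_update(c, u, updates.keys(), seed_with)
          for c, u in updates.items()]
total = [sum(col) % 2**32 for col in zip(*masked)]
print(total)  # [16, 27] -- the true sum; each masked vector alone looks random
```

The server learns only the sum; any single masked update is uniformly distributed over the ring.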
- ABY: A Framework for Efficient Mixed-Protocol Secure Two-Party Computation
- Multiparty computation from somewhat homomorphic encryption, Crypto'12
- Practical Covertly Secure MPC for Dishonest Majority – or: Breaking the SPDZ Limits, ESORICS'13
- MASCOT: faster malicious arithmetic secure computation with oblivious transfer, CCS'16
- SPDZ^2k: Efficient MPC mod 2^k for Dishonest Majority, Crypto'18
- Overdrive: Making SPDZ Great Again, EUROCRYPT'18
- High-Throughput Semi-Honest Secure Three-Party Computation with an Honest Majority, CCS'16
- Sharemind: A framework for fast privacy-preserving computations, ESORICS'08
- Efficiently Verifiable Computation on Encrypted Data, CCS'14
- PrivPy: General and Scalable Privacy-Preserving Data Mining, KDD'19
- MP-SPDZ: A Versatile Framework for Multi-Party Computation, CCS'20
- MOTION - A Framework for Mixed-Protocol Multi-Party Computation, Cryptology ePrint 2020/1137
- ABY2.0: Improved Mixed-Protocol Secure Two-Party Computation (Full Version), USENIX Security'21
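The SPDZ line of work above shares secrets additively over a ring and multiplies them with preprocessed Beaver triples. A toy sketch of both steps over Z_2^64 (no MACs or malicious security, which the real protocols add):

```python
import secrets

MOD = 2**64  # SPDZ2k-style ring Z_{2^64}

def share(x, n=3):
    """Split x into n additive shares that sum to x mod 2^64."""
    shares = [secrets.randbelow(MOD) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % MOD)
    return shares

def reveal(shares):
    return sum(shares) % MOD

def beaver_mul(x_sh, y_sh, a_sh, b_sh, c_sh):
    """Multiply secret-shared x and y using a preprocessed triple c = a*b.

    The parties open d = x - a and e = y - b, then reconstruct shares of
    x*y = d*e + d*b + e*a + c, all mod 2^64.
    """
    d = reveal([(x - a) % MOD for x, a in zip(x_sh, a_sh)])
    e = reveal([(y - b) % MOD for y, b in zip(y_sh, b_sh)])
    z_sh = [(d * b + e * a + c) % MOD for a, b, c in zip(a_sh, b_sh, c_sh)]
    z_sh[0] = (z_sh[0] + d * e) % MOD  # the public d*e term is added once
    return z_sh

# Demo: 3 parties multiply 6 and 7; no party ever sees either input.
a, b = secrets.randbelow(MOD), secrets.randbelow(MOD)
triple = (share(a), share(b), share(a * b % MOD))
z = beaver_mul(share(6), share(7), *triple)
print(reveal(z))  # 42
```

Opening d and e leaks nothing because a and b are uniformly random one-time pads for x and y.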
- TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning, NIPS'17
- The Convergence of Sparsified Gradient Methods, NIPS'18
- Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent, NIPS'17
- Byzantine stochastic gradient descent, NIPS'18
- The Hidden Vulnerability of Distributed Learning in Byzantium, ICML'18
- Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates, ICML'18
- Local Model Poisoning Attacks to Byzantine-Robust Federated Learning, USENIX Security'20
- FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping, NDSS'21
- Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning, NDSS'21
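A recurring defense in the Byzantine-robustness papers above is replacing the server's mean with a robust statistic. A minimal sketch of coordinate-wise median aggregation (in the spirit of the ICML'18 robust-learning papers; real systems combine this with trimming, clipping, or trust bootstrapping):

```python
def coordinate_wise_median(updates):
    """Aggregate client gradient vectors by taking the median per coordinate.

    Up to just under half of the clients can send arbitrary vectors without
    dragging any coordinate outside the range of the honest values.
    """
    def median(vals):
        s = sorted(vals)
        m = len(s) // 2
        return s[m] if len(s) % 2 else (s[m - 1] + s[m]) / 2
    return [median(col) for col in zip(*updates)]

honest = [[0.9, -0.2], [1.1, -0.1], [1.0, -0.3]]
poisoned = honest + [[1e6, -1e6]]        # one Byzantine client
result = coordinate_wise_median(poisoned)
print(result)                            # stays near [1.0, -0.2]
```

With a plain mean, the single poisoned update would shift the aggregate by roughly 2.5e5 per coordinate.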
- Membership inference attacks against machine learning models, S&P'17
- Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning, S&P'19
- Data Poisoning Attacks Against Federated Learning Systems, ESORICS'20
- A Framework for Evaluating Client Privacy Leakages in Federated Learning, ESORICS'20
- A Critical Overview of Privacy in Machine Learning, IEEE Security & Privacy'21
- Cryptography and Machine Learning: Mixing both for private data analysis
- Building Safe A.I.: A Tutorial for Encrypted Deep Learning
- Awesome MPC: Curated List of resources for MPC
- Machine Learning Privacy Protection (机器学习隐私保护, in Chinese)
- TinyGarble: Logic Synthesis and Sequential Descriptions for Yao's Garbled Circuits
- SPDZ-2: Multiparty computation with SPDZ, MASCOT, and Overdrive offline phases
- ABY: A Framework for Efficient Mixed-Protocol Secure Two-Party Computation
- Obliv-C: C compiler for embedding privacy-preserving protocols
- TFHE: Fast Fully Homomorphic Encryption Library over the Torus
- SEAL: Simple Encrypted Arithmetic Library
- PySEAL: Python interface to SEAL
- HElib: An Implementation of homomorphic encryption
- EzPC: programmable, efficient, and scalable secure two-party computation for machine learning
- CUDA-accelerated Fully Homomorphic Encryption Library
- CrypTen: A framework for Privacy Preserving Machine Learning
- tf-encrypted: A Framework for Machine Learning on Encrypted Data
- Sharemind
- PythonPaillier
- TenSEAL
- MP-SPDZ
- Securenn-public
- SecMML
- mnist-mpc
- Private-Set-Intersection
- falcon-public
- Rosetta
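Several entries above (PythonPaillier, TenSEAL, BatchCrypt) rely on additively homomorphic encryption: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. A toy Paillier sketch with deliberately tiny, insecure parameters, just to make the homomorphism concrete:

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# Toy keypair with tiny primes -- insecure, for illustration only.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                       # standard choice of generator
lam = lcm(p - 1, q - 1)
# mu = L(g^lam mod n^2)^-1 mod n, where L(u) = (u - 1) / n
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m, r):
    # c = g^m * r^n mod n^2; r must be random and coprime to n
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

c1, c2 = encrypt(12, r=17), encrypt(30, r=23)
# Multiplying ciphertexts adds the plaintexts: E(m1) * E(m2) = E(m1 + m2)
print(decrypt((c1 * c2) % n2))  # 42
```

This additive property is what lets a federated-learning server sum encrypted client updates without decrypting any individual one.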