privacy-preserving-machine-learning
There are 96 repositories under the privacy-preserving-machine-learning topic.
EthicalML/awesome-production-machine-learning
A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning
jphall663/awesome-machine-learning-interpretability
A curated list of awesome responsible machine learning resources.
innovation-cat/Awesome-Federated-Machine-Learning
Everything about federated learning: research papers, books, code, tutorials, videos, and beyond
pytorch/opacus
Training PyTorch models with differential privacy
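At its core, Opacus implements DP-SGD: clip each per-example gradient to bound its sensitivity, then add calibrated Gaussian noise before averaging. The sketch below illustrates that primitive in plain Python; the function and parameter names are illustrative, not the Opacus API (Opacus wires this into the optimizer via its PrivacyEngine).

```python
import math
import random

def dp_sgd_aggregate(per_example_grads, max_grad_norm=1.0, noise_multiplier=1.1, seed=0):
    """Clip each per-example gradient to max_grad_norm, sum, add Gaussian
    noise scaled by noise_multiplier * max_grad_norm, then average.
    Illustrative sketch of the DP-SGD primitive, not the Opacus API."""
    rng = random.Random(seed)
    dim = len(per_example_grads[0])
    summed = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, max_grad_norm / (norm + 1e-12))  # clip to bound sensitivity
        for i, x in enumerate(g):
            summed[i] += x * scale
    sigma = noise_multiplier * max_grad_norm
    noisy = [s + rng.gauss(0.0, sigma) for s in summed]
    return [v / len(per_example_grads) for v in noisy]  # average as SGD would

grads = [[3.0, 4.0], [0.3, 0.4]]  # first gradient has norm 5, second norm 0.5
update = dp_sgd_aggregate(grads, max_grad_norm=1.0, noise_multiplier=0.0)
print(update)  # with zero noise: clipped sum / 2 -> [0.45, 0.6]
```

The privacy guarantee comes from the noise being calibrated to the clipping bound, which caps any single example's influence on the update.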
securefederatedai/openfl
An Open Framework for Federated Learning.
LatticeX-Foundation/Rosetta
A Privacy-Preserving Framework Based on TensorFlow
trailofbits/PrivacyRaven
Privacy Testing for Deep Learning
microsoft/robustdg
Toolkit for building machine learning models that generalize to unseen domains and are robust to privacy and other attacks.
snwagh/securenn-public
Implementation of protocols in SecureNN.
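SecureNN's protocols are built on secret sharing: each value is split into random-looking shares held by different parties, and linear operations are performed share-wise. A minimal sketch of additive sharing over a prime field (SecureNN itself works over the ring Z_{2^64} with a 3-party layout; the field and parameters here are simplified for illustration):

```python
import random

random.seed(42)   # deterministic demo
P = 2**61 - 1     # prime modulus; SecureNN itself uses the ring Z_{2^64}

def share(x, n_parties=3):
    """Split x into n additive shares that sum to x mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Linear operations work share-wise: parties add their local shares of
# x and y to obtain shares of x + y without ever seeing x or y.
x_sh, y_sh = share(123456), share(654321)
z_sh = [(a + b) % P for a, b in zip(x_sh, y_sh)]
print(reconstruct(z_sh))  # 777777
```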
awslabs/fast-differential-privacy
Fast, memory-efficient, scalable optimization of deep learning with differential privacy
ucbrise/piranha
Piranha: A GPU Platform for Secure Computation
snwagh/falcon-public
Implementation of protocols in Falcon
APPFL/APPFL
Advanced Privacy-Preserving Federated Learning framework
DiscreetAI/decentralized-ml
Full stack service enabling decentralized machine learning on private data
yamanalab/PP-CNN
Privacy Preserving Convolutional Neural Network using Homomorphic Encryption for secure inference
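The key property such schemes rely on is homomorphism: the server can compute on ciphertexts without the decryption key. PP-CNN builds on SEAL (lattice-based homomorphic encryption); as a compact stand-in, here is a toy Paillier cryptosystem, which is additively homomorphic and fits in a few lines. The primes are demo-sized and everything here is illustrative, not the SEAL API:

```python
import math
import random

p, q = 1000003, 1000033      # demo-sized primes; real keys use >=2048-bit moduli
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1) # Carmichael function of n
mu = pow(lam, -1, n)         # valid because the generator g = n + 1

def encrypt(m, rng=random.Random(7)):
    r = rng.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

c1, c2 = encrypt(20), encrypt(22)
# Multiplying ciphertexts adds plaintexts: a server can sum encrypted
# values without ever holding the decryption key.
print(decrypt((c1 * c2) % n2))  # 42
```

Secure inference schemes exploit exactly this: linear layers of a CNN are evaluated on ciphertexts server-side, so the client's input never appears in the clear.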
FIGLAB/Vid2Doppler
This is the research repository for Vid2Doppler: Synthesizing Doppler Radar Data from Videos for Training Privacy-Preserving Activity Recognition.
sisaman/GAP
GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation (USENIX Security '23)
ayushm-agrawal/Federated-Learning-Implementations
This repository contains implementations of various papers on Federated Learning.
LukasStruppek/Plug-and-Play-Attacks
[ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks" and "Be Careful What You Smooth For".
shreya-28/Secure-ML
Secure Linear Regression in the Semi-Honest Two-Party Setting.
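Secure linear regression in the two-party setting reduces to computing products and inner products on secret-shared data. The standard building block for secure multiplication in the semi-honest model is a Beaver triple: a dealer distributes shares of a random (a, b, c = a*b), and the parties exchange only masked values. This is a generic sketch of that technique, not the specific protocol in this repository:

```python
import random

P = 2**31 - 1  # prime field for the demo; the repo's actual modulus may differ

def share(x, rng):
    r = rng.randrange(P)
    return r, (x - r) % P

def beaver_mul(x, y, rng):
    """Two parties hold additive shares of x and y; a trusted dealer hands
    out shares of a random triple (a, b, c = a*b). Only the masked values
    d = x - a and e = y - b are opened, revealing nothing about x or y."""
    # Dealer phase
    a, b = rng.randrange(P), rng.randrange(P)
    c = (a * b) % P
    a0, a1 = share(a, rng); b0, b1 = share(b, rng); c0, c1 = share(c, rng)
    x0, x1 = share(x, rng); y0, y1 = share(y, rng)
    # Online phase: parties open d and e
    d = (x0 - a0 + x1 - a1) % P
    e = (y0 - b0 + y1 - b1) % P
    # Each party computes its share of x*y; party 0 adds the public d*e term
    z0 = (c0 + d * b0 + e * a0 + d * e) % P
    z1 = (c1 + d * b1 + e * a1) % P
    return (z0 + z1) % P

print(beaver_mul(12, 34, random.Random(1)))  # 408
```

Correctness follows from expanding c + db + ea + de = ab + (x-a)b + (y-b)a + (x-a)(y-b) = xy.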
leriomaggio/ppml-tutorial
Privacy-Preserving Machine Learning (PPML) Tutorial
dilawarm/federated
Privacy-Preserving Federated Learning Applied to Decentralized Data
microsoft/responsible-ai-toolbox-privacy
A library for statistically estimating the privacy of ML pipelines from membership inference attacks
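The attack family being estimated here exploits a simple signal: models tend to assign lower loss to their training members than to unseen points. A minimal loss-threshold membership inference attack, with entirely synthetic numbers (the library's own estimators are far more careful):

```python
import math
import random

def nll(p_true):
    """Negative log-likelihood of the model's probability on the true label."""
    return -math.log(max(p_true, 1e-12))

rng = random.Random(0)
# Synthetic model behavior: higher confidence (lower loss) on training members.
member_losses = [nll(rng.uniform(0.7, 0.99)) for _ in range(1000)]
nonmember_losses = [nll(rng.uniform(0.3, 0.9)) for _ in range(1000)]

threshold = 0.2  # guess "member" when loss falls below this (tuned on shadow data in practice)
tp = sum(l < threshold for l in member_losses)
fp = sum(l < threshold for l in nonmember_losses)
advantage = tp / 1000 - fp / 1000  # attack advantage over random guessing
print(round(advantage, 3))
```

A model with meaningful privacy guarantees (e.g. trained with DP-SGD) should drive this advantage toward zero.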
hharcolezi/ldp-protocols-mobility-cdrs
Implementation of local differential privacy mechanisms in Python.
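The canonical local-DP mechanism for a single bit is randomized response: each user flips their bit with a calibrated probability before reporting, and the aggregator unbiases the noisy mean. A self-contained sketch (protocols in this area generalize this idea to k-ary values; names here are illustrative):

```python
import math
import random

def randomized_response(bit, epsilon, rng):
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it.
    This satisfies epsilon-local differential privacy for one bit."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if rng.random() < p else 1 - bit

def estimate_mean(reports, epsilon):
    """Unbias the noisy mean: E[report] = (1 - p) + true_mean * (2p - 1)."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    noisy = sum(reports) / len(reports)
    return (noisy - (1 - p)) / (2 * p - 1)

rng = random.Random(0)
true_bits = [1] * 300 + [0] * 700  # true population mean: 0.3
reports = [randomized_response(b, epsilon=1.0, rng=rng) for b in true_bits]
print(round(estimate_mean(reports, 1.0), 2))
```

No individual report reveals much (each bit is deniable), yet the population mean is recoverable up to sampling noise.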
JiangChSo/PFLM
Privacy-preserving federated learning is distributed machine learning in which multiple collaborators train a model through protected gradients. To tolerate users dropping out, existing practical schemes rely on (t, N)-threshold secret sharing, which guarantees security only under a strong assumption: the threshold t must exceed half the number of users. This assumption is too rigorous for some scenarios. Motivated by this issue, PFLM first introduces membership proofs for federated learning, using cryptographic accumulators to accumulate users' IDs into proofs that are published on a public blockchain for users to verify. With membership proofs, PFLM removes the threshold assumption while maintaining the security guarantees. It also integrates a result-verification algorithm, based on a variant of ElGamal encryption, that checks the correctness of results aggregated by the cloud server. Security analysis in the random oracle model shows that PFLM guarantees privacy against active adversaries, and the implementation and experiments demonstrate its computation and communication performance.
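The (t, N)-threshold secret sharing this description refers to is typically Shamir's scheme: the secret is the constant term of a random degree-(t-1) polynomial, and any t shares reconstruct it by Lagrange interpolation while t-1 reveal nothing. A sketch over a prime field (parameters are illustrative):

```python
import random

P = 2**61 - 1  # prime field

def shamir_share(secret, t, n, rng):
    """Split secret so that any t of the n shares reconstruct it."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def shamir_reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

rng = random.Random(0)
shares = shamir_share(2024, t=3, n=5, rng=rng)
print(shamir_reconstruct(shares[:3]))  # any 3 of the 5 shares suffice: 2024
```

The security constraint PFLM targets arises because dropout-robust aggregation needs t honest surviving users, which existing schemes only guarantee when t > N/2.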
AlanPeng0897/Defend_MI
[KDD 2022] "Bilateral Dependency Optimization: Defending Against Model-inversion Attacks"
chamathpali/FedSim
Similarity Guided Model Aggregation for Federated Learning
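The baseline that similarity-guided schemes refine is FedAvg: the server averages client model parameters weighted by local dataset size. A minimal sketch (the weighting and names are illustrative, not FedSim's actual aggregation rule):

```python
def fed_avg(client_weights, client_sizes):
    """Average client model parameters, weighted by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

clients = [[1.0, 2.0], [3.0, 4.0]]  # two clients' (flattened) model parameters
sizes = [100, 300]                  # local dataset sizes
print(fed_avg(clients, sizes))      # [2.5, 3.5]
```

Similarity-guided variants replace the fixed size-based weights with weights derived from how similar each client's update is to the others, down-weighting outlier or non-IID clients.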
mmalekzadeh/privacy-preserving-bandits
Privacy-Preserving Bandits (MLSys'20)
athenarc/smpc-analytics
📊 Privacy-preserving medical data analytics using secure multi-party computation: an end-to-end use case. M.Sc. thesis of A. Giannopoulos and D. Mouris at the University of Athens, Greece.
amartya18x/tapas
Tricks for Accelerating (encrypted) Prediction As a Service
Lucieno/gforce-public
A crypto-assisted framework for protecting the privacy of models and queries in inference.
inaccel/heflow
Open source platform for the privacy-preserving machine learning lifecycle
mikeroyal/Differential-Privacy-Guide
Differential Privacy Guide
TTitcombe/NoPeekNN
PyTorch implementation of NoPeekNN
barlettacarmen/CrCNN
Crypto-Convolutional Neural Network library written on top of SEAL 2.3.1
eric-ai-lab/FedVLN
[ECCV 2022] Official pytorch implementation of the paper "FedVLN: Privacy-preserving Federated Vision-and-Language Navigation"