VivienneCelia's Stars
encryptogroup/SAFEFL
SAFEFL: MPC-friendly Framework for Private and Robust Federated Learning
limbopro/Paolujichang
Censorship circumvention 🕸️: a collection of "airport" proxy providers that have absconded or shut down (2020-2024); contributions welcome.
FederatedAI/research
caserec/Datasets-for-Recommender-Systems
A repository of high-quality, topic-centric public data sources for Recommender Systems (RS).
rdz98/FedRecAttack
Model Poisoning Attack to Federated Recommendation
lokinko/Federated-Learning
Federated Learning
JonasGeiping/poisoning-gradient-matching
Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
Sanghyun-Hong/Gradient-Shaping
[Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping
git-disl/DataPoisoning_FL
Code for Data Poisoning Attacks Against Federated Learning Systems
jinyuan-jia/BaggingCertifyDataPoisoning
YuYang0901/EPIC
Not All Poisons are Created Equal: Robust Training against Data Poisoning (ICML 2022)
ebarkhordar/shilling-attack-detection-for-recommender-systems
Shilling attack detection for recommender systems
yjw1029/UA-FedRec
Python implementation of our KDD 2023 paper "UA-FedRec: Untargeted Attack on Federated News Recommendation".
fuying-wang/Data-poisoning-attacks-on-factorization-based-collaborative-filtering
Data poisoning attacks on matrix factorization (MF) based recommender systems.
Yueeeeeeee/RecSys-Extraction-Attack
[RecSys 2021] PyTorch Implementation of Black-Box Attacks on Sequential Recommenders via Data-Free Model Extraction
DACXMines/Tesseract
Implementation and exploration of the paper "Tesseract: Gradient Flip Score to Secure Federated Learning against Model Poisoning Attacks"
DistributedML/FoolsGold
A sybil-resilient distributed learning protocol.
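A simplified sketch of the FoolsGold intuition: sybils pursuing a shared poisoning objective submit unusually similar updates, so clients whose accumulated update directions are close to a peer's get downweighted. This is illustrative only and omits FoolsGold's pardoning and logit-rescaling steps; names are not from this repo.

```python
import numpy as np

def foolsgold_weights(update_histories):
    """update_histories: (n_clients, dim) array, each row the running sum
    of one client's past model updates, flattened."""
    unit = update_histories / np.maximum(
        np.linalg.norm(update_histories, axis=1, keepdims=True), 1e-12)
    cos_sim = unit @ unit.T
    np.fill_diagonal(cos_sim, -np.inf)           # ignore self-similarity
    max_sim = cos_sim.max(axis=1)                # similarity to closest peer
    weights = 1.0 - np.clip(max_sim, 0.0, 1.0)   # more similar -> less trust
    total = weights.sum()
    return weights / total if total > 0 else np.full(len(weights), 1.0 / len(weights))
```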
cpwan/Attack-Adaptive-Aggregation-in-Federated-Learning
This is the code for our paper "Robust Federated Learning with Attack-Adaptive Aggregation", accepted at FTL-IJCAI'21.
michaelTJC96/Label_Flipping_Attack
The project evaluates the vulnerability of federated learning systems to a targeted data poisoning attack known as label flipping. It studies the scenario in which a malicious participant can manipulate only the raw training data on their own device, so non-expert attackers can poison the system without knowing the model type, its parameters, or the federated learning process. It also analyses whether, and how effectively, an attacker can conceal their tracks while poisoning the raw data of other devices.
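A minimal sketch of the label-flipping idea this description outlines: the attacker only rewrites labels in its local data before training, so no knowledge of the model or FL protocol is needed. The class mapping and function names are illustrative, not taken from this repo.

```python
import numpy as np

def flip_labels(labels, source_class, target_class, flip_fraction=1.0, seed=0):
    """Relabel a fraction of `source_class` samples as `target_class`."""
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    source_idx = np.where(labels == source_class)[0]
    n_flip = int(len(source_idx) * flip_fraction)
    flip_idx = rng.choice(source_idx, size=n_flip, replace=False)
    labels[flip_idx] = target_class
    return labels

# Example: a malicious client flips all of its local "1" labels to "7",
# then participates in federated training as usual.
y_poisoned = flip_labels(np.array([0, 1, 1, 7, 1, 3]), source_class=1, target_class=7)
```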
med-air/FL-COVID
[npj Digital Medicine'21] Federated deep learning for detecting COVID-19 lung abnormalities in CT: a privacy-preserving multinational validation study. (Nature Publishing Group)
janerjzou/AD_FL_DL
Applies federated learning and deep learning (a deep autoencoder) to detect abnormal data from IoT devices.
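A hedged sketch of the reconstruction-error approach such autoencoder detectors typically use: train on normal readings only, then flag samples the model reconstructs poorly. Layer sizes and the threshold rule here are assumptions, not this repo's code.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features, n_hidden=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_scores(model, x):
    """Per-sample mean squared reconstruction error."""
    model.eval()
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

# After (federated) training on normal data, pick a threshold such as the
# 99th percentile of scores on held-out normal samples:
#   threshold = anomaly_scores(model, x_normal).quantile(0.99)
#   is_anomaly = anomaly_scores(model, x_new) > threshold
```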
smallsmallstrong/Attacks-and-Defenses-in-Federated-Learning
fyfserena/Pratical-DL-Sys-Performance-Robustness-Security
Columbia University COMS6998, sections 07 and 12
fnc11/CosDefence
A defence mechanism against Data Poisoning attacks in Federated Learning (FL).
MrWater98/backdoors101
Suyi32/Learning-to-Detect-Malicious-Clients-for-Robust-FL
jgshu/Attacks-and-Defenses-in-Federated-Learning
ebagdasa/backdoor_federated_learning
Source code for paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459)
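The paper's central model-replacement trick: the attacker scales its backdoored model so that, after federated averaging, the global model is approximately replaced by it. A sketch with illustrative names; `gamma` plays the role of n/eta in the paper.

```python
import numpy as np

def scaled_backdoor_update(global_weights, backdoored_weights, gamma):
    """The (already-trained) attacker model, scaled up before submission."""
    return global_weights + gamma * (backdoored_weights - global_weights)

# With n clients averaged at server learning rate eta, gamma = n / eta makes
# the attacker's contribution dominate: the aggregate lands near
# `backdoored_weights` once honest updates roughly cancel near convergence.
```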
jeremy313/FL-WBC
Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective".
innovation-cat/Awesome-Federated-Machine-Learning
Everything about federated learning, including research papers, books, code, tutorials, videos, and beyond