Skyress1's Stars
cc233/CalFAT
Code for the NeurIPS 2022 paper "CalFAT: Calibrated Federated Adversarial Training with Label Skewness".
warisgill/TraceFL
TraceFL is a novel mechanism for Federated Learning that achieves interpretability by tracking neuron provenance. It identifies clients responsible for global model predictions, achieving 99% accuracy across diverse datasets (e.g., medical imaging) and neural networks (e.g., GPT).
griffisben/Soccer-Analyses
The code here is pretty much everything I use to create my datasets (from FBRef) or visuals. Most of my scatter plots are made in Tableau, though, using datasets built from the FBRef downloads.
Harry24k/adversarial-defenses-pytorch
PyTorch implementations of adversarial defenses and utilities.
huanranchen/AdversarialAttacks
roboflow/sports
Computer vision applied to sports.
BioIntelligence-Lab/Flower-Medicalsegmentation
JonasGeiping/invertinggradients
Algorithms to recover input data from their gradient signal through a neural network
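As a minimal illustration of the idea (not this repository's algorithm): for a single linear neuron with squared-error loss, one training sample's input can be recovered exactly from the weight and bias gradients, since their ratio cancels the shared error term.

```python
# Toy gradient-leakage sketch (illustrative only, not the repo's method):
# for y = w*x + b with loss L = (y - t)**2 on ONE sample,
# dL/dw = 2*(y - t)*x and dL/db = 2*(y - t), so x = (dL/dw) / (dL/db).

def gradients(w, b, x, t):
    """Gradients of (w*x + b - t)**2 with respect to w and b."""
    err = w * x + b - t
    return 2 * err * x, 2 * err

def recover_input(grad_w, grad_b):
    """Recover the private input from the shared gradients."""
    return grad_w / grad_b

w, b = 0.5, -0.2          # current model parameters (hypothetical)
x_secret, t = 3.7, 1.0    # private training example
gw, gb = gradients(w, b, x_secret, t)
print(recover_input(gw, gb))  # -> 3.7
```

Real attacks on deep networks instead optimize a dummy input so that its gradients match the observed ones, but the leakage principle is the same.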
thunlp/TAADpapers
Must-read Papers on Textual Adversarial Attack and Defense
sattarov/FedTabDiff
Implementation of the paper: "FedTabDiff: Federated Learning of Diffusion Models for Synthetic Mixed-Type Tabular Data Generation"
Alexkael/Randomized-Adversarial-Training
IVRL/FastAdvL1
csdongxian/AWP
Code for the NeurIPS 2020 paper "Adversarial Weight Perturbation Helps Robust Generalization".
Alexkael/S2O
Blealtan/efficient-kan
An efficient pure-PyTorch implementation of Kolmogorov-Arnold Network (KAN).
prathameshtari/Predicting-Football-Match-Outcome-using-Machine-Learning
Football match outcome prediction using machine learning algorithms in a Jupyter notebook.
changzhang777/ANCRA
Official code for ANCRA.
Hmzbo/Football-Analytics-with-Deep-Learning-and-Computer-Vision
taoqi98/FedSampling
Code for FedSampling.
ericyoc/adversarial-defense-cnn-poc
Proof of concept of a classical or convolutional neural network model with adversarial defense protection.
AnnaPallares/Trustworthy-AI
Trustworthy Artificial Intelligence Course Notebooks, 2023
KaiyuanZh/FLIP
[ICLR 2023, Best Paper Award at ECCV’22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning
ehsannowroozi/FederatedLearning_Poison_LF_FP
Contains the simulation code for Label Flipping (LF) and Feature Poisoning (FP) attacks against Federated Learning in Computer Networks.
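A label-flipping (LF) attack can be sketched in a few lines (illustrative only, not this repository's simulation code): a malicious client remaps one class to another before local training, biasing the aggregated global model. The class choices below are hypothetical.

```python
# Minimal label-flipping poisoning sketch for a federated client
# (illustrative only): flip every source_label to target_label
# before local training.

def flip_labels(dataset, source_label, target_label):
    """Return a copy of (x, y) pairs with source_label replaced by target_label."""
    return [(x, target_label if y == source_label else y)
            for x, y in dataset]

client_data = [([0.1, 0.9], 1), ([0.8, 0.2], 0), ([0.2, 0.7], 1)]
poisoned = flip_labels(client_data, source_label=1, target_label=7)
print([y for _, y in poisoned])  # -> [7, 0, 7]
```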
griffisben/griffis_soccer_analysis
A collection of different functions I use to analyze soccer/football
Erfandarzi/ARMOR
Robust federated learning and adversarial attacks.
mckayjohns/complete-football-analytics
warisgill/FedDefender
FedDefender is a novel defense mechanism designed to safeguard federated learning from poisoning attacks (e.g., backdoor attacks).
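For context on the defense setting, here is a generic robust-aggregation sketch (coordinate-wise median), a common baseline against poisoned client updates; this is not FedDefender's actual mechanism, just an illustration of the problem it addresses.

```python
# Coordinate-wise median aggregation (generic poisoning-robust baseline,
# NOT FedDefender's mechanism): with a minority of poisoned clients,
# the per-parameter median ignores the outlier update.
from statistics import median

def median_aggregate(client_updates):
    """Aggregate flat parameter vectors with the median instead of the mean."""
    return [median(vals) for vals in zip(*client_updates)]

updates = [
    [0.1, 0.2, 0.3],    # honest client
    [0.2, 0.1, 0.4],    # honest client
    [9.0, -9.0, 9.0],   # poisoned update
]
print(median_aggregate(updates))  # -> [0.2, 0.1, 0.4]
```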
sofiaumea/FLAdversarialAttacks
A federated learning system using regression neural networks, with adversarial attacks implemented.
shivambang/Adversarial-Federated-Learning
THUYimingLi/backdoor-learning-resources
A list of backdoor learning resources