Pinned Repositories
ARMA-Networks
Dynamics-Aware-Robust-Training
ICLR 2023 paper "Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness" by Yuancheng Xu, Yanchao Sun, Micah Goldblum, Tom Goldstein and Furong Huang
Mementos
paad_adv_rl
Code for ICLR 2022 publication: Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL. https://openreview.net/forum?id=JM2kFbJvvI
perceptionCLIP
Code for our ICLR 2024 paper "PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts"
SWIFT
SWIFT: Shared WaIt Free Transmission
tuformer
VLM-Poisoning
Code for NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models"
WAVES
Code for our paper "Benchmarking the Robustness of Image Watermarks"
WocaR-RL
Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning
Furong's Lab's Repositories
umd-huang-lab/WAVES
Code for our paper "Benchmarking the Robustness of Image Watermarks"
umd-huang-lab/perceptionCLIP
Code for our ICLR 2024 paper "PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts"
umd-huang-lab/tracevla
umd-huang-lab/VLM-Poisoning
Code for NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models"
umd-huang-lab/Mementos
umd-huang-lab/Dynamics-Aware-Robust-Training
ICLR 2023 paper "Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness" by Yuancheng Xu, Yanchao Sun, Micah Goldblum, Tom Goldstein and Furong Huang
umd-huang-lab/WocaR-RL
Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning
umd-huang-lab/SWIFT
SWIFT: Shared WaIt Free Transmission
umd-huang-lab/ARMA-Networks
umd-huang-lab/paad_adv_rl
Code for ICLR 2022 publication: Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL. https://openreview.net/forum?id=JM2kFbJvvI
umd-huang-lab/SIMA
umd-huang-lab/tuformer
umd-huang-lab/private-topic-model-tensor-methods
We provide an end-to-end differentially private spectral algorithm for learning LDA, based on matrix/tensor decompositions, and establish theoretical guarantees on the utility/consistency of the estimated model parameters. The spectral algorithm consists of multiple algorithmic steps, named "edges", into which noise can be injected to obtain differential privacy. We identify subsets of edges, named "configurations", such that adding noise to all edges in such a subset guarantees differential privacy of the end-to-end spectral algorithm. We characterize the sensitivity of the edges with respect to the input and thus estimate the amount of noise to be added to each edge for any required privacy level. We then characterize the utility loss for each configuration as a function of the injected noise. Overall, by combining the sensitivity and utility characterizations, we obtain an end-to-end differentially private spectral algorithm for LDA and identify the configuration that outperforms the others in any specific regime. We are the first to achieve utility guarantees under the required level of differential privacy for learning in LDA. Our method systematically outperforms differentially private variational inference.
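The noise-injection step described above can be illustrated with the standard Gaussian mechanism: given a bound on an edge's sensitivity to any single input, calibrated noise is added to that intermediate statistic. This is a minimal, hypothetical sketch of the general technique, not the repository's exact calibration; the function name, example values, and sensitivity bound are illustrative assumptions.

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
    """Add Gaussian noise to `value` for (epsilon, delta)-differential privacy.

    Uses the classic analytic calibration:
    sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon.
    """
    rng = np.random.default_rng(rng)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))

# Hypothetical example: privatize one "edge" (an intermediate statistic)
# whose L2-sensitivity to any single document is bounded by 0.5.
stat = np.array([0.2, 0.7, 0.1])  # e.g., an empirical moment estimate
private_stat = gaussian_mechanism(stat, sensitivity=0.5, epsilon=1.0, delta=1e-5)
```

A tighter sensitivity bound directly reduces the injected noise, which is why the abstract's per-edge sensitivity characterization matters for utility.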
umd-huang-lab/cmarl_ame
Implementation of ICLR'23 publication "Certifiably Robust Policy Learning against Adversarial Multi-Agent Communication".
umd-huang-lab/Tensorial-Neural-Networks
We implement tensorial neural networks (TNNs), which generalize existing neural networks by extending tensor operations from low-order operands to high-order operands.
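The low-order-to-high-order generalization can be sketched with a single contraction: a dense layer contracts an order-1 input with an order-2 weight, while a tensorial layer contracts higher-order operands along shared modes. The shapes and contraction pattern below are illustrative assumptions, not the repository's actual layer definitions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense layer: order-1 input contracted with order-2 weight.
x_vec = rng.normal(size=(30,))
W_mat = rng.normal(size=(30, 7))
y_vec = x_vec @ W_mat                  # shape (7,)

# Tensorial layer: order-3 input contracted with order-3 weight
# along the two shared modes (b, c).
x = rng.normal(size=(4, 5, 6))
W = rng.normal(size=(5, 6, 7))
y = np.einsum('abc,bcd->ad', x, W)     # shape (4, 7)
```

Flattening the shared modes recovers an ordinary matrix multiply, which is the sense in which the tensorial operation generalizes the dense layer.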
umd-huang-lab/transfer-fairness
A self-training method for transferring fairness under distribution shifts.
umd-huang-lab/Transfer-Q
Official repository of the paper "Transfer Q Star: Principled Decoding for LLM Alignment"
umd-huang-lab/transfer_across_obs
Code for paper "Transfer RL across Observation Feature Spaces via Model-Based Regularization". https://openreview.net/forum?id=7KdAoOsI81C
umd-huang-lab/Easy2Hard-Bench
Official repository of the paper "Easy2Hard-Bench: Standardized Difficulty Labels for Profiling LLM Performance and Generalization" in NeurIPS 2024 Track Datasets and Benchmarks
umd-huang-lab/PROTECTED
Code for paper "Beyond Worst-case Attacks: Robust RL with Adaptive Defense via Non-dominated Policies" by Xiangyu Liu, Chenghao Deng, Yanchao Sun, Yongyuan Liang, Furong Huang
umd-huang-lab/RealFM
Repository for RealFM: A Realistic Mechanism to Incentivize Federated Participation and Contribution
umd-huang-lab/COPlanner
umd-huang-lab/FalseRefusal
Automatic Pseudo-Harmful Prompt Generation for Evaluating False Refusals in Large Language Models
umd-huang-lab/ortho-conv
umd-huang-lab/poison-rl
Code for paper Vulnerability-Aware Poisoning Mechanism for Online RL with Unknown Dynamics. https://arxiv.org/abs/2009.00774
umd-huang-lab/ELBERT
Official Implementation of the paper "Equal Long-term Benefit Rate: Adapting Static Fairness Notions to Sequential Decision Making" by Yuancheng Xu, Chenghao Deng, Yanchao Sun, Ruijie Zheng, Xiyao Wang, Jieyu Zhao and Furong Huang.
umd-huang-lab/evasion-rl
umd-huang-lab/PDML
umd-huang-lab/PRISE
umd-huang-lab/TACO
Code for "TACO: Temporal Latent Action-Driven Contrastive Loss for Visual Reinforcement Learning"