Pinned Repositories
AdvUnlearn
Official implementation of NeurIPS'24 paper "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models". This work adversarially unlearns the text encoder to enhance the robustness of unlearned DMs against adversarial prompt attacks and achieves a better balance between unlearning performance and image generation quality (a schematic sketch of this adversarial unlearning loop follows this list).
BiP
[NeurIPS22] "Advancing Model Pruning via Bi-level Optimization" by Yihua Zhang*, Yuguang Yao*, Parikshit Ram, Pu Zhao, Tianlong Chen, Mingyi Hong, Yanzhi Wang, and Sijia Liu
DeepZero
[ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Diffenderfer, Jiancheng Liu, Konstantinos Parasyris, Yihua Zhang, Zheng Zhang, Bhavya Kailkhura, Sijia Liu (a toy zeroth-order optimization sketch follows this list)
Diffusion-MU-Attack
The official implementation of ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now". This work introduces a fast and effective attack method to evaluate the harmful-content generation ability of safety-driven unlearned diffusion models (a toy prompt-attack sketch follows this list).
Fast-BAT
[ICML22] "Revisiting and Advancing Fast Adversarial Training through the Lens of Bi-level Optimization" by Yihua Zhang*, Guanhua Zhang*, Prashant Khanduri, Mingyi Hong, Shiyu Chang, and Sijia Liu
ILM-VP
[CVPR23] "Understanding and Improving Visual Prompting: A Label-Mapping Perspective" by Aochuan Chen, Yuguang Yao, Pin-Yu Chen, Yihua Zhang, and Sijia Liu
Robust-MoE-CNN
[ICCV23] "Robust Mixture-of-Expert Training for Convolutional Neural Networks" by Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Huan Zhang, Pin-Yu Chen, Shiyu Chang, Zhangyang (Atlas) Wang, Sijia Liu
Unlearn-Saliency
[ICLR24 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation" by Chongyu Fan*, Jiancheng Liu*, Yihua Zhang, Eric Wong, Dennis Wei, Sijia Liu
Unlearn-Sparse
[NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gaowen Liu, Yang Liu, Pranay Sharma, Sijia Liu
UnlearnCanvas
[NeurIPS 2024 D&B Track] "UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models" by Yihua Zhang, Chongyu Fan, Yimeng Zhang, Yuguang Yao, Jinghan Jia, Jiancheng Liu, Gaoyuan Zhang, Gaowen Liu, Ramana Kompella, Xiaoming Liu, Sijia Liu
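
The AdvUnlearn entry above describes alternating an adversarial prompt search with text-encoder unlearning. The Python snippet below is a minimal, purely illustrative sketch of that bi-level pattern, not the official implementation: the tiny "text encoder", the concept/anchor embeddings, the losses, and every hyperparameter are toy placeholders chosen only so the loop runs end to end.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
dim = 32
# Toy stand-in for the text encoder being fine-tuned (the real one is a CLIP text encoder).
text_encoder = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
concept_emb = torch.randn(dim)          # hypothetical embedding of the concept to erase
anchor_emb = torch.randn(dim)           # hypothetical embedding of a harmless anchor concept
retain_prompts = torch.randn(16, dim)   # hypothetical prompts whose behavior should be preserved
opt = torch.optim.Adam(text_encoder.parameters(), lr=1e-3)

def concept_score(prompt_emb):
    # Toy proxy for "how strongly the encoded prompt still triggers the erased concept".
    return F.cosine_similarity(text_encoder(prompt_emb), concept_emb, dim=-1).mean()

for step in range(100):
    # Inner maximization: find a bounded prompt perturbation that best recovers the concept.
    delta = torch.zeros(dim, requires_grad=True)
    for _ in range(5):
        (grad,) = torch.autograd.grad(-concept_score(concept_emb + delta), delta)
        with torch.no_grad():
            delta -= 0.1 * grad.sign()
            delta.clamp_(-0.5, 0.5)
    adv_prompt = (concept_emb + delta).detach()

    # Outer minimization: unlearn under the adversarial prompt while keeping utility elsewhere.
    unlearn_loss = concept_score(adv_prompt)                                     # suppress the concept
    anchor_loss = (text_encoder(adv_prompt) - anchor_emb).pow(2).mean()          # steer it to the anchor instead
    retain_loss = (text_encoder(retain_prompts) - retain_prompts).pow(2).mean()  # toy utility-preservation term
    loss = unlearn_loss + anchor_loss + 0.1 * retain_loss
    opt.zero_grad()
    loss.backward()
    opt.step()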
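
DeepZero (also pinned above) concerns zeroth-order optimization, i.e., training from loss values alone without backpropagation; the paper's contribution is making that scale to deep model training. For orientation only, here is a tiny self-contained example of the classic randomized finite-difference gradient estimator on an arbitrary quadratic; it is not DeepZero's algorithm, and the objective, query budget, and step size are made up for the example.

import torch

torch.manual_seed(0)
d = 50
target = torch.randn(d)

def loss_fn(w):
    # Black-box objective: only function values are assumed to be available.
    return ((w - target) ** 2).sum()

def rge_grad(w, mu=1e-3, queries=20):
    # Randomized gradient estimate: average of forward finite differences along random directions.
    g = torch.zeros_like(w)
    base = loss_fn(w)
    for _ in range(queries):
        u = torch.randn_like(w)
        g += (loss_fn(w + mu * u) - base) / mu * u
    return g / queries

w = torch.zeros(d)
for step in range(300):
    w -= 0.01 * rge_grad(w)   # plain gradient descent, but driven by the zeroth-order estimate

print(float(loss_fn(w)))      # should be far below the initial loss at w = 0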
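
The Diffusion-MU-Attack entry stress-tests unlearned diffusion models with adversarial prompts. The snippet below is a toy, hypothetical illustration of an adversarial prompt search in that spirit, not the ECCV'24 method: a greedy token-substitution loop maximizes a stand-in "concept score" under a frozen model, and the vocabulary, scorer, and prompt length are invented for the example.

import torch
import torch.nn as nn

torch.manual_seed(0)
vocab, dim, prompt_len = 100, 16, 8
embed = nn.Embedding(vocab, dim)   # stand-in for the tokenizer/embedding of the attacked model
scorer = nn.Linear(dim, 1)         # stand-in for "does the output still show the erased concept?"

@torch.no_grad()
def concept_score(tokens):
    return scorer(embed(tokens).mean(dim=0)).item()

tokens = torch.randint(0, vocab, (prompt_len,))   # initial benign prompt
for _ in range(3):                                # a few greedy sweeps over prompt positions
    for pos in range(prompt_len):
        best_tok, best_score = tokens[pos].item(), concept_score(tokens)
        for cand in range(vocab):                 # brute-force candidate tokens at this position
            tokens[pos] = cand
            s = concept_score(tokens)
            if s > best_score:
                best_tok, best_score = cand, s
        tokens[pos] = best_tok

print(tokens.tolist(), concept_score(tokens))     # the prompt that best re-elicits the concept under this toy scorer
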
OPTML Group's Repositories
OPTML-Group/BiP
[NeurIPS22] "Advancing Model Pruning via Bi-level Optimization" by Yihua Zhang*, Yuguang Yao*, Parikshit Ram, Pu Zhao, Tianlong Chen, Mingyi Hong, Yanzhi Wang, and Sijia Liu
OPTML-Group/Unlearn-Saliency
[ICLR24 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation" by Chongyu Fan*, Jiancheng Liu*, Yihua Zhang, Eric Wong, Dennis Wei, Sijia Liu
OPTML-Group/Fast-BAT
[ICML22] "Revisiting and Advancing Fast Adversarial Training through the Lens of Bi-level Optimization" by Yihua Zhang*, Guanhua Zhang*, Prashant Khanduri, Mingyi Hong, Shiyu Chang, and Sijia Liu
OPTML-Group/Unlearn-Sparse
[NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gaowen Liu, Yang Liu, Pranay Sharma, Sijia Liu
OPTML-Group/Diffusion-MU-Attack
The official implementation of ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now". This work introduces a fast and effective attack method to evaluate the harmful-content generation ability of safety-driven unlearned diffusion models.
OPTML-Group/UnlearnCanvas
[NeurIPS 2024 D&B Track] "UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models" by Yihua Zhang, Chongyu Fan, Yimeng Zhang, Yuguang Yao, Jinghan Jia, Jiancheng Liu, Gaoyuan Zhang, Gaowen Liu, Ramana Kompella, Xiaoming Liu, Sijia Liu
OPTML-Group/ILM-VP
[CVPR23] "Understanding and Improving Visual Prompting: A Label-Mapping Perspective" by Aochuan Chen, Yuguang Yao, Pin-Yu Chen, Yihua Zhang, and Sijia Liu
OPTML-Group/DeepZero
[ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Diffenderfer, Jiancheng Liu, Konstantinos Parasyris, Yihua Zhang, Zheng Zhang, Bhavya Kailkhura, Sijia Liu
OPTML-Group/Robust-MoE-CNN
[ICCV23] "Robust Mixture-of-Expert Training for Convolutional Neural Networks" by Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Huan Zhang, Pin-Yu Chen, Shiyu Chang, Zhangyang (Atlas) Wang, Sijia Liu
OPTML-Group/AdvUnlearn
Official implementation of NeurIPS'24 paper "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models". This work adversarially unlearns the text encoder to enhance the robustness of unlearned DMs against adversarial prompt attacks and achieves a better balance between unlearning performance and image generation quality.
OPTML-Group/QF-Attack
[CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang and Sijia Liu
OPTML-Group/Unlearn-Simple
"Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning" by Chongyu Fan*, Jiancheng Liu*, Licong Lin*, Jinghan Jia, Ruiqi Zhang, Song Mei, Sijia Liu
OPTML-Group/Unlearn-WorstCase
[ECCV24] "Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning" by Chongyu Fan*, Jiancheng Liu*, Alfred Hero, Sijia Liu
OPTML-Group/SOUL
Official repo for EMNLP'24 paper "SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning"
OPTML-Group/DP4TL
[NeurIPS2023] "Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced Transfer Learning" by Yihua Zhang*, Yimeng Zhang*, Aochuan Chen*, Jinghan Jia, Jiancheng Liu, Gaowen Liu, Mingyi Hong, Shiyu Chang, Sijia Liu
OPTML-Group/RED-adv
[WACV25] "Can Adversarial Examples Be Parsed to Reveal Victim Model Information?" by Yuguang Yao*, Jiancheng Liu*, Yifan Gong*, Xiaoming Liu, Yanzhi Wang, Xue Lin, Sijia Liu
OPTML-Group/WAGLE
Official repo for NeurIPS'24 paper "WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models"
OPTML-Group/BiBadDiff
"From Trojan Horses to Castle Walls: Unveiling Bilateral Backdoor Effects in Diffusion Models" by Zhuoshi Pan*, Yuguang Yao*, Gaowen Liu, Bingquan Shen, H. Vicky Zhao, Ramana Rao Kompella, Sijia Liu
OPTML-Group/Black-Box-Defense
[ICLR22] "How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective" by Yimeng Zhang, Yuguang Yao, Jinghan Jia, Jinfeng Yi, Mingyi Hong, Shiyu Chang, Sijia Liu
OPTML-Group/CLAW-SAT
[SANER 2023] "CLAWSAT: Towards Both Robust and Accurate Code Models"
OPTML-Group/RED-ICLR22
[ICLR22] "Reverse Engineering of Imperceptible Adversarial Image Perturbations" by Yifan Gong*, Yuguang Yao*, Yize Li, Yimeng Zhang, Xiaoming Liu, Xue Lin, Sijia Liu
OPTML-Group/BackdoorMSPC
[ICLR2024] "Backdoor Secrets Unveiled: Identifying Backdoor Data with Optimized Scaled Prediction Consistency" by Soumyadeep Pal, Yuguang Yao, Ren Wang, Bingquan Shen, Sijia Liu
OPTML-Group/Fairness-Reprogramming
[NeurIPS 22] "Fairness Reprogramming" by Guanhua Zhang*, Yihua Zhang*, Yang Zhang, Wenqi Fan, Qing Li, Sijia Liu, Shiyu Chang
OPTML-Group/BLO-Toolbox
OPTML-Group/BLOC-IRM
OPTML-Group/OPTML-Group.github.io
OPTML-Group/.github