GMU Mingrui Liu (ML) Lab
This is Dr. Mingrui Liu's Machine Learning research group at George Mason University.
Pinned Repositories
Bilevel-Coreset-Selection-via-Regularization
[NeurIPS 2023] Bilevel Coreset Selection in Continual Learning: A New Formulation and Algorithm
Bilevel-Optimization-under-Unbounded-Smoothness
[ICLR 2024, Spotlight] Bilevel Optimization under Unbounded Smoothness: A New Algorithm and Convergence Analysis
Communication-Efficient-Local-Gradient-Clipping
[NeurIPS 2022] A Communication-Efficient Distributed Gradient Clipping Algorithm for Training Deep Neural Networks
episode
[ICLR 2023] EPISODE: Episodic Gradient Clipping with Periodic Resampled Corrections for Federated Learning with Heterogeneous Data
episode_plusplus
[NeurIPS 2023] Federated Learning with Client Subsampling, Data Heterogeneity, and Unbounded Smoothness: A New Algorithm and Lower Bounds
Federated-Sparse-Learning
[ICML 2022] Fast Composite Optimization and Statistical Recovery in Federated Learning
Lifelong-AUC
[UAI 2023] AUC Maximization in Imbalanced Lifelong Learning
Provable-Benefit-Local-Steps-Feature-Learning
[ICML 2024] Provable Benefits of Local Steps in Heterogeneous Federated Learning for Neural Networks: A Feature Learning Perspective
Single-Loop-bilevel-Optimizer-under-Unbounded-Smoothness
[ICML 2024] A Nearly Optimal Single Loop Algorithm for Stochastic Bilevel Optimization under Unbounded Smoothness
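Several of the projects above (e.g., the NeurIPS 2022 distributed clipping work, EPISODE, and the unbounded-smoothness papers) build on gradient clipping. As a rough point of reference, here is a minimal illustrative sketch of a clipped-gradient step, which rescales the gradient g to min(1, λ/‖g‖)·g before the update. This is generic textbook clipping, not code taken from any of these repositories:

```python
# Minimal sketch of clipped gradient descent (illustrative only; the function
# name and parameters are hypothetical, not from the repositories above).
import numpy as np

def clipped_sgd_step(w, grad, lr=0.1, lam=1.0):
    """One step of w <- w - lr * min(1, lam / ||grad||) * grad."""
    norm = np.linalg.norm(grad)
    # Rescale the gradient so its norm never exceeds the threshold lam.
    scale = min(1.0, lam / norm) if norm > 0 else 1.0
    return w - lr * scale * grad
```

The lab's papers study when and how such clipping-based updates converge, e.g., under unbounded smoothness or with infrequent communication in federated settings; see the individual repositories for the actual algorithms.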