Pinned Repositories
bayesian-peft
[NeurIPS 2024] BLoB: Bayesian Low-Rank Adaptation by Backpropagation for Large Language Models
GRDA
[ICLR 2022] Graph-Relational Domain Adaptation
interpretable-foundation-models
[ICML 2024] Probabilistic Conceptual Explainers (PACE): Trustworthy Conceptual Explanations for Vision Foundation Models
llm-continual-learning-survey
Continual Learning of Large Language Models: A Comprehensive Survey
MMLU-SR
multimodal-needle-in-a-haystack
Code and data for the benchmark "Multimodal Needle in a Haystack (MMNeedle): Benchmarking Long-Context Capability of Multimodal Large Language Models"
TSDA
[ICML 2023] Taxonomy-Structured Domain Adaptation
unified-continual-learning
[NeurIPS 2023] A Unified Approach to Domain Incremental Learning with Memory: Theory and Algorithm
variational-imbalanced-regression
[NeurIPS 2023] Variational Imbalanced Regression: Fair Uncertainty Quantification via Probabilistic Smoothing
VDI
[ICLR 2023 (Spotlight)] Domain-Indexing Variational Bayes: Interpretable Domain Index for Domain Adaptation
ML@Rutgers's Repositories
Wang-ML-Lab/llm-continual-learning-survey
Continual Learning of Large Language Models: A Comprehensive Survey
Wang-ML-Lab/GRDA
[ICLR 2022] Graph-Relational Domain Adaptation
Wang-ML-Lab/VDI
[ICLR 2023 (Spotlight)] Domain-Indexing Variational Bayes: Interpretable Domain Index for Domain Adaptation
Wang-ML-Lab/multimodal-needle-in-a-haystack
Code and data for the benchmark "Multimodal Needle in a Haystack (MMNeedle): Benchmarking Long-Context Capability of Multimodal Large Language Models"
Wang-ML-Lab/unified-continual-learning
[NeurIPS 2023] A Unified Approach to Domain Incremental Learning with Memory: Theory and Algorithm
Wang-ML-Lab/bayesian-peft
[NeurIPS 2024] BLoB: Bayesian Low-Rank Adaptation by Backpropagation for Large Language Models
Wang-ML-Lab/TSDA
[ICML 2023] Taxonomy-Structured Domain Adaptation
Wang-ML-Lab/MMLU-SR
Wang-ML-Lab/interpretable-foundation-models
[ICML 2024] Probabilistic Conceptual Explainers (PACE): Trustworthy Conceptual Explanations for Vision Foundation Models
Wang-ML-Lab/multi-domain-active-learning
[AAAI 2024] Composite Active Learning: Towards Multi-Domain Active Learning with Theoretical Guarantees
Wang-ML-Lab/variational-imbalanced-regression
[NeurIPS 2023] Variational Imbalanced Regression: Fair Uncertainty Quantification via Probabilistic Smoothing
Wang-ML-Lab/ECBM
[ICLR 2024] Energy-Based Concept Bottleneck Models: Unifying Prediction, Concept Intervention, and Probabilistic Interpretations
Wang-ML-Lab/Formal-LLM
Formal-LLM: Integrating Formal Language and Natural Language for Controllable LLM-based Agents
Wang-ML-Lab/train-free-uncertainty
[AAAI 2022] Training-Free Uncertainty Estimation for Dense Regression: Sensitivity as a Surrogate