AI4LIFE-GROUP
The AI4LIFE group at Harvard is led by Hima Lakkaraju. We study the interpretability, fairness, privacy, and reliability of AI and ML models.
Pinned Repositories
disagreement-problem
Code for the paper "The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective"
fair-unlearning
Fair Machine Unlearning: Data Removal while Mitigating Disparities
fair_ranking_effectiveness_on_outcomes
AIES 2021 Paper: Does Fair Ranking Improve Minority Outcomes?
GraphXAI
GraphXAI: Resource to support the development and evaluation of GNN explainers
lfa
Local function approximation (LFA) framework, NeurIPS 2022
LLM_Explainer
Code for paper: Are Large Language Models Post Hoc Explainers?
OpenXAI
OpenXAI: Towards a Transparent Evaluation of Model Explanations
rise-against-distribution-shift
Codebase for robust learning under an intersection of causal and adversarial distribution shifts
ROAR
SpLiCE
Sparse Linear Concept Embeddings
AI4LIFE-GROUP's Repositories
AI4LIFE-GROUP/OpenXAI
OpenXAI: Towards a Transparent Evaluation of Model Explanations
AI4LIFE-GROUP/SpLiCE
Sparse Linear Concept Embeddings
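SpLiCE decomposes a dense embedding (e.g., a CLIP image embedding) into a sparse, nonnegative combination of human-interpretable concept vectors. Below is a minimal sketch of that decomposition as a nonnegative lasso problem; the random `concept_dict` and the scikit-learn solver are illustrative stand-ins, not the repo's dictionary or API.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical setup: a dictionary of unit-norm concept embeddings
# (n_concepts x d) and one embedding z to decompose over it.
rng = np.random.default_rng(0)
concept_dict = rng.normal(size=(1000, 512))
concept_dict /= np.linalg.norm(concept_dict, axis=1, keepdims=True)
z = rng.normal(size=512)
z /= np.linalg.norm(z)

# Sparse nonnegative reconstruction: z ≈ concept_dict.T @ w with w >= 0.
lasso = Lasso(alpha=0.01, positive=True, fit_intercept=False, max_iter=5000)
lasso.fit(concept_dict.T, z)
w = lasso.coef_

print("active concepts:", int((w > 0).sum()))
print("top concept indices:", np.argsort(-w)[:5])
```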
AI4LIFE-GROUP/LLM_Explainer
Code for paper: Are Large Language Models Post Hoc Explainers?
AI4LIFE-GROUP/rise-against-distribution-shift
Codebase for robust learning under an intersection of causal and adversarial distribution shifts
AI4LIFE-GROUP/ROAR
AI4LIFE-GROUP/lfa
Local function approximation (LFA) framework, NeurIPS 2022
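The LFA framework views post hoc explainers (LIME, SHAP, gradient-based methods, ...) as performing local function approximation of the black box over a perturbation neighborhood. Here is a minimal sketch of one such instance, a LIME-like local linear fit; the function name and toy black box are illustrative, not the repo's API.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_linear_explanation(f, x, sigma=0.1, n_samples=500, seed=0):
    """Fit an interpretable linear model to f on a Gaussian
    neighborhood of x; its coefficients serve as attributions."""
    rng = np.random.default_rng(seed)
    X = x + sigma * rng.normal(size=(n_samples, x.shape[0]))
    y = np.array([f(xp) for xp in X])
    return Ridge(alpha=1e-3).fit(X, y).coef_

# Hypothetical black box: a smooth nonlinear scorer.
f = lambda v: np.tanh(2 * v[0] - v[1] ** 2)
print(local_linear_explanation(f, np.array([0.5, -0.2])))
```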
AI4LIFE-GROUP/DiET
Code for "Discriminative Feature Attributions via Distractor Erasure Tuning"
AI4LIFE-GROUP/disagreement-problem
Code for the paper "The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective"
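The paper measures how often popular explainers disagree on the same prediction, using metrics such as top-k feature agreement. A hedged sketch of that one metric (not the repo's implementation):

```python
import numpy as np

def feature_agreement(attr_a, attr_b, k=3):
    """Fraction of overlap between the top-k features (by absolute
    attribution) selected by two explanation methods."""
    top_a = set(np.argsort(-np.abs(attr_a))[:k])
    top_b = set(np.argsort(-np.abs(attr_b))[:k])
    return len(top_a & top_b) / k

# Example: attributions for the same prediction from two explainers.
lime_attr = np.array([0.7, -0.2, 0.1, 0.05, -0.4, 0.3])
shap_attr = np.array([0.1, -0.6, 0.2, 0.4, -0.1, 0.3])
print(feature_agreement(lime_attr, shap_attr))  # -> 0.333...
```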
AI4LIFE-GROUP/fair-unlearning
Fair Machine Unlearning: Data Removal while Mitigating Disparities
AI4LIFE-GROUP/fair_ranking_effectiveness_on_outcomes
AIES 2021 Paper: Does Fair Ranking Improve Minority Outcomes?
AI4LIFE-GROUP/GraphXAI
GraphXAI: Resource to support the development and evaluation of GNN explainers
AI4LIFE-GROUP/nifty
Code for the paper "Towards a Unified Framework for Fair and Stable Graph Representation Learning" (https://arxiv.org/abs/2102.13186)
AI4LIFE-GROUP/robust-grads
Code for https://arxiv.org/abs/2306.06716
AI4LIFE-GROUP/unified_representation
AI4LIFE-GROUP/arxiv-latex-cleaner
arXiv LaTeX Cleaner: Easily clean the LaTeX code of your paper to submit to arXiv
AI4LIFE-GROUP/average-case-robustness
Characterizing Data Point Vulnerability via Average-Case Robustness, UAI 2024
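Average-case robustness is the probability that a model's prediction is unchanged under random perturbations of an input; the paper develops efficient analytical estimators of this quantity. Below is a naive Monte Carlo sketch of the quantity itself (not the paper's estimators), with a hypothetical linear classifier.

```python
import numpy as np

def average_case_robustness(predict, x, sigma=0.1, n_samples=1000, seed=0):
    """Monte Carlo estimate of P[prediction unchanged] under
    Gaussian perturbations of x with scale sigma."""
    rng = np.random.default_rng(seed)
    base = predict(x[None, :])[0]
    X = x[None, :] + sigma * rng.normal(size=(n_samples, x.shape[0]))
    return float(np.mean(predict(X) == base))

# Hypothetical classifier: thresholded linear score.
w = np.array([1.0, -2.0, 0.5])
predict = lambda X: (X @ w > 0).astype(int)
print(average_case_robustness(predict, np.array([0.1, 0.0, 0.2])))
```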
AI4LIFE-GROUP/lcnn
Low Curvature Neural Networks (NeurIPS 2022)
AI4LIFE-GROUP/UAI22_DataPoisoningAttacksonOff-PolicyPolicyEvaluationMethods_RL
DOPE: Data Poisoning Attacks on Off-Policy Policy Evaluation Methods
AI4LIFE-GROUP/amplify
AI4LIFE-GROUP/Balanced_Recourse
AI4LIFE-GROUP/CounterfactualDistanceAttack
"On the Privacy Risks of Algorithmic Recourse". Martin Pawelczyk, Himabindu Lakkaraju* and Seth Neel*. In International Conference on Artificial Intelligence and Statistics (AISTATS), PMLR, 2023.
AI4LIFE-GROUP/In-Context-Unlearning
"In-Context Unlearning: Language Models as Few Shot Unlearners". Martin Pawelczyk, Seth Neel* and Himabindu Lakkaraju*; arXiv preprint: arXiv:2310.07579; 2023.
AI4LIFE-GROUP/med-safety-bench
MedSafetyBench: Benchmark dataset for medical safety of LLMs
AI4LIFE-GROUP/ProbabilisticallyRobustRecourse
"Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness". M. Pawelczyk, T. Datta, J. v.d Heuvel, G. Kasneci, H. Lakkaraju. International Conference on Learning Representations 2023 (ICLR).
AI4LIFE-GROUP/rocerf_code
Source code for ROCERF