P-Lambda's Repositories
p-lambda/wilds
A machine learning benchmark of in-the-wild distribution shifts, with data loaders, evaluators, and default models.
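The package exposes a small loading API; here is a minimal usage sketch following the quick-start in the WILDS README (the dataset name and transform are illustrative, and exact signatures may differ across package versions):

```python
# Minimal WILDS quick-start sketch: load a benchmark dataset, take the
# train split, and iterate over (input, label, metadata) batches.
import torchvision.transforms as transforms
from wilds import get_dataset
from wilds.common.data_loaders import get_train_loader

# "camelyon17" is one of the benchmark datasets; download=True fetches it.
dataset = get_dataset(dataset="camelyon17", download=True)

# Splits carry the domain metadata needed to evaluate under distribution shift.
train_data = dataset.get_subset(
    "train",
    transform=transforms.Compose(
        [transforms.Resize((96, 96)), transforms.ToTensor()]
    ),
)

# "standard" gives i.i.d. sampling; grouped loaders are also available.
train_loader = get_train_loader("standard", train_data, batch_size=16)

for x, y, metadata in train_loader:
    pass  # training step goes here
```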
p-lambda/dsir
DSIR: a large-scale data selection framework for language model training, based on importance resampling.
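DSIR's core idea is importance resampling over cheap features. Below is a hedged NumPy sketch of that idea (hashed n-gram features, bag-of-ngrams importance weights, Gumbel top-k selection); it illustrates the technique only and is not the package's actual API:

```python
# Hedged sketch of importance resampling for data selection: score raw
# examples by how target-like their hashed n-gram counts are, then sample
# k of them without replacement via the Gumbel top-k trick.
import numpy as np

BUCKETS = 10_000  # hash n-grams into a fixed number of buckets

def featurize(text):
    """Hashed unigram + bigram counts."""
    counts = np.zeros(BUCKETS)
    toks = text.lower().split()
    for ng in toks + [" ".join(p) for p in zip(toks, toks[1:])]:
        counts[hash(ng) % BUCKETS] += 1
    return counts

def fit_log_probs(texts):
    """Add-one-smoothed bag-of-ngrams distribution over hash buckets."""
    total = sum(featurize(t) for t in texts) + 1.0
    return np.log(total / total.sum())

def dsir_select(raw_texts, target_texts, k, seed=0):
    rng = np.random.default_rng(seed)
    log_ratio = fit_log_probs(target_texts) - fit_log_probs(raw_texts)
    # Log importance weight: log p_target(x) - log p_raw(x) under the
    # two bag-of-ngrams models.
    log_w = np.array([featurize(t) @ log_ratio for t in raw_texts])
    # Adding Gumbel noise and taking the top k draws k examples without
    # replacement with probability proportional to exp(log_w).
    scores = log_w + rng.gumbel(size=len(raw_texts))
    return [raw_texts[i] for i in np.argsort(-scores)[:k]]
```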
p-lambda/jukemir
Perform transfer learning for music information retrieval (MIR) using Jukebox!
p-lambda/verified_calibration
Calibration library and code for the NeurIPS 2019 (Spotlight) paper "Verified Uncertainty Calibration" by Ananya Kumar, Percy Liang, and Tengyu Ma.
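For orientation, the quantity such a library estimates can be sketched directly: a generic equal-width-binned expected calibration error in NumPy. The paper's contribution includes debiased, verified estimators, which this simple plug-in sketch does not implement:

```python
# Generic expected calibration error (ECE) with equal-width confidence bins;
# a plug-in estimator for illustration, not the paper's debiased estimator.
import numpy as np

def ece(probs, labels, num_bins=10):
    """probs: (n, num_classes) predicted probabilities; labels: (n,) ints."""
    confidences = probs.max(axis=1)
    accuracies = (probs.argmax(axis=1) == labels).astype(float)
    # Assign each prediction to a confidence bin; the min() clips
    # confidence 1.0 into the last bin.
    bin_ids = np.minimum((confidences * num_bins).astype(int), num_bins - 1)
    error = 0.0
    for b in range(num_bins):
        in_bin = bin_ids == b
        if in_bin.any():
            gap = abs(accuracies[in_bin].mean() - confidences[in_bin].mean())
            error += in_bin.mean() * gap  # weight the gap by bin mass
    return error
```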
p-lambda/incontext-learning
Experiments and code to generate the GINC small-scale in-context learning dataset from "An Explanation of In-context Learning as Implicit Bayesian Inference".
p-lambda/gradual_domain_adaptation
Code for the ICML 2020 paper "Understanding Self-Training for Gradual Domain Adaptation" by Ananya Kumar, Tengyu Ma, and Percy Liang.
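Gradual self-training, the algorithm the paper analyzes, is simple to sketch: pseudo-label each successive unlabeled intermediate domain with the current model and retrain on those labels. A hedged scikit-learn illustration (the base classifier is an illustrative choice):

```python
# Hedged sketch of gradual self-training: adapt a source classifier along a
# sequence of unlabeled intermediate domains, ordered from source to target.
from sklearn.linear_model import LogisticRegression

def gradual_self_train(src_X, src_y, intermediate_Xs):
    """intermediate_Xs: list of unlabeled feature arrays, source -> target."""
    model = LogisticRegression(max_iter=1000).fit(src_X, src_y)
    for X in intermediate_Xs:
        pseudo_labels = model.predict(X)  # hard pseudo-labels for this step
        model = LogisticRegression(max_iter=1000).fit(X, pseudo_labels)
    return model
```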
p-lambda/swords
The Stanford Word Substitution (Swords) Benchmark
p-lambda/in-n-out
Code for the ICLR 2021 paper "In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness".
p-lambda/robust_tradeoff
Code for the ICML 2020 paper "Understanding and Mitigating the Tradeoff Between Robustness and Accuracy" by Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John Duchi, and Percy Liang. Paper available at https://arxiv.org/pdf/2002.10716.pdf.
p-lambda/composed_finetuning
Code for the ICML 2021 paper "Composed Fine-Tuning: Freezing Pre-Trained Denoising Autoencoders for Improved Generalization" by Sang Michael Xie, Tengyu Ma, and Percy Liang.
p-lambda/LinkBERT
[ACL 2022] LinkBERT: A Knowledgeable Language Model 😎 Pretrained with Document Links
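LinkBERT is designed as a drop-in BERT replacement, so it loads through the standard transformers classes; a minimal sketch, assuming the authors' publicly released michiyasunaga/LinkBERT-base checkpoint name:

```python
# Hedged sketch: load LinkBERT via Hugging Face transformers and encode text.
# The checkpoint name is assumed from the authors' public release and may
# differ for other sizes or the biomedical (BioLinkBERT) variants.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("michiyasunaga/LinkBERT-base")
model = AutoModel.from_pretrained("michiyasunaga/LinkBERT-base")

inputs = tokenizer("LinkBERT pretrains with document links.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```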
p-lambda/dragon
[NeurIPS 2022] DRAGON 🐲: Deep Bidirectional Language-Knowledge Graph Pretraining