A repository that lists papers about LLMs.
- In-Context Learning
  - [2023/05/28] Mitigating Label Biases for In-context Learning | [paper] | [code]
  - [2023/05/16] What In-Context Learning "Learns" In-Context: Disentangling Task Recognition and Task Learning | [paper] | [code]
  - [2021/02/19] Calibrate Before Use: Improving Few-Shot Performance of Language Models | [paper] | [code]
- Instruction-Tuning
- Alignment
  - [2023/06/02] Fine-Grained Human Feedback Gives Better Rewards for Language Model Training | [paper] | [code]
  - [2023/05/17] SLiC-HF: Sequence Likelihood Calibration with Human Feedback | [paper] | [code]
  - [2023/04/11] RRHF: Rank Responses to Align Language Models with Human Feedback without tears | [paper] | [code]
  - [2022/03/31] BRIO: Bringing Order to Abstractive Summarization | [paper] | [code]