Attention Is All You Need
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
PAL: Program-aided Language Models
ReAct: Synergizing Reasoning and Acting in Language Models
Scaling Laws for Neural Language Models
What Language Model Architecture and Pretraining Objective Work Best for Zero-Shot Generalization?
LLaMA: Open and Efficient Foundation Language Models
Language Models are Few-Shot Learners
Training Compute-Optimal Large Language Models
BloombergGPT: A Large Language Model for Finance
Scaling Instruction-Finetuned Language Models
Introducing FLAN: More generalizable Language Models with Instruction Fine-Tuning
Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning
On the Effectiveness of Parameter-Efficient Fine-Tuning
LoRA: Low-Rank Adaptation of Large Language Models
QLoRA: Efficient Finetuning of Quantized LLMs
The Power of Scale for Parameter-Efficient Prompt Tuning
Fine-tuning 20B LLMs with RLHF on a 24GB consumer GPU
Training language models to follow instructions with human feedback
Learning to summarize from human feedback
Proximal Policy Optimization Algorithms
Direct Preference Optimization: Your Language Model is Secretly a Reward Model
Constitutional AI: Harmlessness from AI Feedback
Holistic Evaluation of Language Models (HELM)
General Language Understanding Evaluation (GLUE) benchmark
SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems
ROUGE: A Package for Automatic Evaluation of Summaries
Measuring Massive Multitask Language Understanding (MMLU)
BIG-bench Hard - Beyond the Imitation Game: Quantifying and Extrapolating the Capabilities of Language Models
LangChain
Who Owns the Generative AI Platform?