🔥 Large Language Models (LLMs) have taken ~~the NLP community~~ the whole world by storm. Here is a curated list of papers about large language models, especially those related to ChatGPT. It also contains frameworks for LLM training, tools to deploy LLMs, courses and tutorials on LLMs, and all publicly available LLM checkpoints and APIs.
If you're interested in the field of LLMs, you may find the above list of milestone papers helpful for exploring its history and state of the art. However, each direction of LLM research offers its own insights and contributions, which are essential to understanding the field as a whole. For detailed paper lists in the various subfields, please refer to the following links (note that the subfields may overlap):
(❗ We would greatly appreciate and welcome your contributions to the following list. ❗)
- Evaluation: evaluate different LLMs, including ChatGPT, in different fields.
- Acceleration: hardware and software acceleration for LLM training and inference.
- Application: use LLMs to do some really cool stuff.
- Augmentation: augment LLMs in different aspects, including faithfulness, expressiveness, domain-specific knowledge, etc.
- Detection: distinguish LLM-generated text from human-written text.
- Chain-of-Thought: a series of intermediate reasoning steps that significantly improves the ability of large language models to perform complex reasoning (a prompting sketch follows this list).
- In-Context Learning: large language models demonstrate an in-context learning (ICL) ability, i.e., learning from a few examples provided in the context.
- RLHF: reinforcement learning from human preferences.
- Prompt Learning: a good prompt is worth 1,000 words.
- Instruction Tuning: finetune a language model on a collection of tasks described via instructions.
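Since chain-of-thought and in-context learning are, at bottom, prompting patterns, a minimal sketch may help make them concrete. The snippet below assumes the openai Python package and an OPENAI_API_KEY environment variable; the few-shot exemplar follows the style of the chain-of-thought paper and is purely illustrative.

```python
import openai  # pip install openai; reads OPENAI_API_KEY from the environment

# Few-shot, chain-of-thought prompt: the exemplar demonstrates intermediate
# reasoning steps before the final answer, which the model then imitates.
prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 "
    "tennis balls. 5 + 6 = 11. The answer is 11.\n\n"
    "Q: The cafeteria had 23 apples. If they used 20 to make lunch and "
    "bought 6 more, how many apples do they have?\n"
    "A:"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # deterministic decoding suits reasoning tasks
)
print(response["choices"][0]["message"]["content"])
```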
There are three important steps in training a ChatGPT-like LLM:
- Pre-training
- Instruction Tuning
- Alignment
The following tables group models by these stages, so that all LLMs are compared apples to apples.
Pre-trained base models:

Model | Size | Architecture | Access | Date | Origin |
---|---|---|---|---|---|
Switch Transformer | 1.6T | Decoder (MoE) | - | 2021-01 | Paper |
GLaM | 1.2T | Decoder (MoE) | - | 2021-12 | Paper |
PaLM | 540B | Decoder | - | 2022-04 | Paper |
MT-NLG | 530B | Decoder | - | 2022-01 | Paper |
Gopher | 280B | Decoder | - | 2021-12 | - |
J1-Jumbo | 178B | Decoder | api | 2021-08 | Paper |
BLOOM | 176B | Decoder | api, ckpt | 2022-11 | Paper |
OPT | 175B | Decoder | api, ckpt | 2022-05 | Paper |
GPT-3 | 175B | Decoder | api | 2020-05 | Paper |
LaMDA | 137B | Decoder | - | 2022-01 | Paper |
GLM | 130B | Decoder | ckpt | 2022-10 | Paper |
YaLM | 100B | Decoder | ckpt | 2022-06 | Blog |
Chinchilla | 70B | Decoder | - | 2022-03 | - |
LLaMA | 65B | Decoder | ckpt | 2023-02 | Paper |
GPT-NeoX | 20B | Decoder | ckpt | 2022-04 | Paper |
UL2 | 20B | agnostic | ckpt | 2022-05 | Paper |
鹏程.盘古α | 13B | Decoder | ckpt | 2021-04 | Paper |
T5 | 11B | Encoder-Decoder | ckpt | 2019-10 | Paper |
CPM-Bee | 10B | Decoder | api | 2022-10 | Paper |
RWKV-4 | 7B | RWKV | ckpt | 2022-09 | Github |
GPT-J | 6B | Decoder | ckpt | 2021-06 | Github |
GPT-Neo | 2.7B | Decoder | ckpt | 2021-03 | Github |
GPT-Neo | 1.3B | Decoder | ckpt | 2021-03 | Github |
Instruction-tuned models:

Model | Size | Architecture | Access | Date | Origin |
---|---|---|---|---|---|
Flan-PaLM | 540B | Decoder | - | 2022-10 | Paper |
BLOOMZ | 176B | Decoder | ckpt | 2022-11 | Paper |
InstructGPT | 175B | Decoder | api | 2022-03 | Paper |
Galactica | 120B | Decoder | ckpt | 2022-11 | Paper |
OpenChatKit | 20B | - | ckpt | 2023-03 | - |
Flan-UL2 | 20B | Decoder | ckpt | 2023-03 | Blog |
Flan-T5 | 11B | Encoder-Decoder | ckpt | 2022-10 | Paper |
T0 | 11B | Encoder-Decoder | ckpt | 2021-10 | Paper |
Alpaca | 7B | Decoder | demo | 2023-03 | Github |
Aligned models (RLHF):

Model | Size | Architecture | Access | Date | Origin |
---|---|---|---|---|---|
GPT-4 | - | - | - | 2023-03 | Blog |
ChatGPT | - | Decoder | demo, api | 2022-11 | Blog |
Sparrow | 70B | - | - | 2022-09 | Paper |
Claude | - | - | demo, api | 2023-03 | Blog |
Alpa is a system for training and serving large-scale neural networks. Scaling neural networks to hundreds of billions of parameters has enabled dramatic breakthroughs such as GPT-3, but training and serving these large-scale neural networks require complicated distributed system techniques. Alpa aims to automate large-scale distributed training and serving with just a few lines of code.
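To illustrate the "few lines of code" claim, here is a minimal sketch built around Alpa's @alpa.parallelize decorator and alpa.grad; the model call and training state are hypothetical placeholders, so treat this as the shape of the API rather than a complete program.

```python
import alpa
import jax.numpy as jnp

# Decorating a JAX train step lets Alpa search for a distributed execution
# plan (data, operator, and pipeline parallelism) automatically.
@alpa.parallelize
def train_step(state, batch):
    def loss_fn(params):
        preds = state.apply_fn(params, batch["x"])  # placeholder model call
        return jnp.mean((preds - batch["y"]) ** 2)

    grads = alpa.grad(loss_fn)(state.params)  # Alpa's drop-in for jax.grad
    return state.apply_gradients(grads=grads)

# state = create_train_state(...)   # e.g. a flax TrainState (placeholder)
# state = train_step(state, batch)  # executes across the whole cluster
```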
DeepSpeed is an easy-to-use deep learning optimization software suite that enables unprecedented scale and speed for DL Training and Inference. Visit us at deepspeed.ai or our Github repo.
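For a taste of the API, a minimal ZeRO training setup might look like the sketch below; the linear layer and config values are toy stand-ins, not DeepSpeed recommendations.

```python
import torch
import deepspeed

model = torch.nn.Linear(512, 512)  # toy stand-in for a real LLM

ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},  # shard optimizer state and gradients
}

# deepspeed.initialize wraps the model in a distributed training engine.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

for step in range(10):
    x = torch.randn(4, 512).to(engine.device).half()
    loss = engine(x).float().mean()  # toy forward pass and "loss"
    engine.backward(loss)            # handles fp16 loss scaling and ZeRO
    engine.step()
```

Scripts like this are typically launched with the `deepspeed` CLI rather than plain `python`.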
Megatron-LM can be found here. Megatron (1, 2, and 3) is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. The repository hosts ongoing research on training large transformer language models at scale, including efficient model-parallel (tensor, sequence, and pipeline) and multi-node pre-training of transformer-based models such as GPT, BERT, and T5 using mixed precision.
Colossal-AI provides a collection of parallel components. It aims to let you write distributed deep learning models the same way you write models on your laptop, and offers user-friendly tools to kickstart distributed training and inference in a few lines.
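A rough sketch of the engine-style training loop from older Colossal-AI examples follows; the linear model and synthetic data are placeholders, and newer releases may favor a different entry point, so check the project's docs.

```python
import colossalai
import torch
from torch.utils.data import DataLoader, TensorDataset

# Reads the distributed environment prepared by `torchrun` or `colossalai run`.
colossalai.launch_from_torch(config={})

model = torch.nn.Linear(128, 10)  # toy stand-in for a real model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()
dataset = TensorDataset(torch.randn(256, 128), torch.randint(0, 10, (256,)))
train_dataloader = DataLoader(dataset, batch_size=32)

# Wrap model, optimizer, criterion, and dataloader in a distributed engine.
engine, train_dataloader, _, _ = colossalai.initialize(
    model, optimizer, criterion, train_dataloader
)

engine.train()
for inputs, labels in train_dataloader:
    inputs, labels = inputs.cuda(), labels.cuda()  # assumes GPU workers
    engine.zero_grad()
    loss = engine.criterion(engine(inputs), labels)
    engine.backward(loss)
    engine.step()
```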
BMTrain is an efficient large model training toolkit that can be used to train large models with tens of billions of parameters. It can train models in a distributed manner while keeping the code as simple as stand-alone training.
Mesh TensorFlow (mtf) is a language for distributed deep learning, capable of specifying a broad class of distributed tensor computations. The purpose of Mesh TensorFlow is to formalize and implement distribution strategies for your computation graph over your hardware/processors, for example: "Split the batch over rows of processors and split the units in the hidden layer across columns of processors." Mesh TensorFlow is implemented as a layer over TensorFlow.
This JAX tutorial discusses parallelism via jax.Array, JAX's unified array type for computations sharded across multiple devices.
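A minimal sketch of the idea: shard an array across a device mesh with NamedSharding, and let jit-compiled computation follow the sharding. The API names match recent JAX releases; the mesh shape and array sizes are illustrative.

```python
import jax
import jax.numpy as jnp
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec

# Arrange all available devices into a 1-D mesh with one named axis.
mesh = Mesh(np.array(jax.devices()), axis_names=("data",))

# Shard the leading axis of x across the "data" axis of the mesh
# (the axis length must be divisible by the device count).
x = jnp.arange(8 * 128.0).reshape(8, 128)
x = jax.device_put(x, NamedSharding(mesh, PartitionSpec("data", None)))

# jit-compiled functions run shard-wise; output sharding is inferred.
y = jax.jit(lambda a: jnp.sin(a) * 2.0)(x)
print(y.sharding)  # shows how y is laid out across the devices
```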
💙 Haystack
Haystack is an open-source NLP framework that allows you to use LLMs and transformer-based models from Hugging Face, OpenAI and Cohere to interact with your own data. It supports 🔍 Semantic Search, 🤖 Agents, ❓ Question Answering, 📝 Summarization and a range of other applications.
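For a sense of the API, here is a minimal extractive question-answering pipeline in the style of Haystack 1.x; the single toy document and model choice are illustrative, and newer versions may organize these classes differently.

```python
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import BM25Retriever, FARMReader
from haystack.pipelines import ExtractiveQAPipeline

# A toy in-memory store holding a single document.
document_store = InMemoryDocumentStore(use_bm25=True)
document_store.write_documents([{"content": "Paris is the capital of France."}])

retriever = BM25Retriever(document_store=document_store)  # sparse retrieval
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")

pipeline = ExtractiveQAPipeline(reader=reader, retriever=retriever)
result = pipeline.run(query="What is the capital of France?")
print(result["answers"][0].answer)
```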
💬 Sidekick
Sidekick is an open-source ETL platform for building LLM apps. It lets you sync data from SaaS tools like Notion, Google Drive, and Confluence to a vector database through an easy-to-use dashboard, and gives you an API endpoint for querying data across all your data sources. It cuts the time needed to build customer support bots, workplace search tools, and conversational interfaces with LLMs from days or weeks to hours.
🦜️🔗 LangChain
Large language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. But using these LLMs in isolation is often not enough to create a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge. This library is aimed at assisting in the development of those types of applications. Common examples of these types of applications include ❓ Question Answering over specific documents, 💬 Chatbots and 🤖 Agents.
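The library's canonical quickstart (from the early langchain releases; the API has since evolved) shows the core pattern: a prompt template composed with an LLM into a chain. It assumes an OPENAI_API_KEY environment variable.

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# A prompt template turns structured input into a full LLM prompt.
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

# Compose the template with an LLM into a reusable chain.
llm = OpenAI(temperature=0.9)
chain = LLMChain(llm=llm, prompt=prompt)

print(chain.run("colorful socks"))
```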
Use ChatGPT on WeChat via wechaty.
- [Susan Zhang] Open Pretrained Transformers Youtube
- [Ameet Deshpande] How Does ChatGPT Work? Slides
- [Yao Fu] Pre-training, Instruction Tuning, Alignment, and Specialization: On the Sources of Large Language Models' Capabilities Bilibili
- [Hung-yi Lee] An Analysis of the Principles of ChatGPT Youtube
- [Jay Mody] GPT in 60 Lines of NumPy Link
- [ICML 2022] Welcome to the "Big Model" Era: Techniques and Systems to Train and Serve Bigger Models Link
- [NeurIPS 2022] Foundational Robustness of Foundation Models Link
- [Andrej Karpathy] Let's build GPT: from scratch, in code, spelled out. Video|Code
- [DAIR.AI] Prompt Engineering Guide Link
- [邱锡鹏] Capability Analysis and Applications of Large Language Models Slides | Video
- [Philipp Schmid] Fine-tune FLAN-T5 XL/XXL using DeepSpeed & Hugging Face Transformers Link
- [HuggingFace] Illustrating Reinforcement Learning from Human Feedback (RLHF) Link
- [HuggingFace] What Makes a Dialog Agent Useful? Link
- [张俊林] The Road to AGI: Essentials of Large Language Model (LLM) Technology Link
- [大师兄] A Detailed Explanation of ChatGPT/InstructGPT Link
- [HeptaAI] The Core of ChatGPT: InstructGPT, PPO Reinforcement Learning from Instruction Feedback Link
- [Yao Fu] How does GPT Obtain its Ability? Tracing Emergent Abilities of Language Models to their Sources Link
- [Stephen Wolfram] What Is ChatGPT Doing … and Why Does It Work? Link
- [Jingfeng Yang] Why did all of the public reproduction of GPT-3 fail? Link
- [Hung-yi Lee] How ChatGPT Was (Possibly) Made: The Process of Socializing GPT Video
- [Princeton] Understanding Large Language Models Homepage
- [OpenBMB] Open Course on Large Models Homepage
- [Stanford] CS224N-Lecture 11: Prompting, Instruction Finetuning, and RLHF Slides
- [Stanford] CS324-Large Language Models Homepage
- [Stanford] CS25-Transformers United V2 Homepage
- [Stanford Webinar] GPT-3 & Beyond Video
- [李沐] Paper Reading: InstructGPT Bilibili Youtube
- [陳縕儂] OpenAI InstructGPT: Learning from Human Feedback, the Predecessor of ChatGPT Youtube
- [李沐] HELM: A Holistic Evaluation of Language Models Bilibili
- [李沐] Paper Reading: GPT, GPT-2, and GPT-3 Bilibili Youtube
- [Aston Zhang] Paper Reading: Chain of Thought Bilibili Youtube
- [MIT] Introduction to Data-Centric AI Homepage
- Noam Chomsky: The False Promise of ChatGPT [2023-03-08][Noam Chomsky]
- Is ChatGPT 175 Billion Parameters? Technical Analysis [2023-03-04][Owen]
- Towards ChatGPT and Beyond [2023-02-20][知乎][欧泽彬]
- The Challenges of Catching Up with ChatGPT, and Its Alternatives [2023-02-19][李rumor]
- A Conversation with Megvii Research's Xiangyu Zhang: ChatGPT's Research Value May Be Even Greater [2023-02-16][知乎][旷视科技]
- Conjectures on Eight Technical Questions about ChatGPT [2023-02-15][知乎][张家俊]
- ChatGPT: Development History, Principles, Technical Architecture, and Industry Outlook [2023-02-15][知乎][陈巍谈芯]
- Twenty Views on ChatGPT [2023-02-13][知乎][熊德意]
- ChatGPT: What I've Seen, Heard, and Felt [2023-02-11][知乎][刘聪NLP]
- The Next Generation Of Large Language Models [2023-02-07][Forbes]
- Large Language Model Training in 2023 [2023-02-03][Cem Dilmegani]
- What Are Large Language Models Used For? [2023-01-26][NVIDIA]
- Large Language Models: A New Moore's Law [2021-10-26][Huggingface]
- Awesome ChatGPT Prompts - A collection of prompt examples to be used with the ChatGPT model.
- awesome-chatgpt-prompts-zh - A Chinese collection of prompt examples to be used with the ChatGPT model.
- Awesome ChatGPT - Curated list of resources for ChatGPT and GPT-3 from OpenAI.
- Chain-of-Thoughts Papers - A trend started by "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models".
- Instruction-Tuning-Papers - A trend started by Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022).
- LLM Reading List - A paper & resource list of large language models.
- Reasoning using Language Models - Collection of papers and resources on Reasoning using Language Models.
- Chain-of-Thought Hub - Measuring LLMs' Reasoning Performance
- ShareGPT - Share your wildest ChatGPT conversations with one click.
- Major LLMs + Data Availability
- MOSS - A conversational language model like ChatGPT.
- 500+ Best AI Tools
- Cohere Summarize Beta - Introducing Cohere Summarize Beta: A New Endpoint for Text Summarization
- chatgpt-wrapper - ChatGPT Wrapper is an open-source unofficial Python API and CLI that lets you interact with ChatGPT.
- Open-evals - A framework that extends OpenAI's Evals to different language models.
- Cursor - Write, edit, and chat about your code with a powerful AI.
This is an active repository and your contributions are always welcome!
I will keep some pull requests open if I'm not sure whether they are awesome for LLMs; you can vote for them by adding a 👍.
If you have any questions about this opinionated list, do not hesitate to contact me at chengxin1998@stu.pku.edu.cn.