Awesome-Code-LLM

A curated list of language modeling research for code and related datasets.

This is the repo for our survey Unifying the Perspectives of NLP and Software Engineering: A Survey on Language Models for Code - a comprehensive review of LLM research for code. Works in each category are ordered chronologically. If you have a basic understanding of machine learning but are new to NLP, we also provide a list of recommended readings in Section 6.

News

🔥🔥🔥 [2024/03] We included a new downstream task: frontend development & web agents.

🔥🔥     [2024/03] Claude 3 is out, with a reported 84.9 on HumanEval: The Claude 3 Model Family.

🔥🔥     [2024/03] We included a new downstream task: compiler optimization.

🔥🔥     [2024/02] StarCoder 2 and The Stack v2: The Next Generation.

🔥         [2024/02] Google open-sourced Gemma.

🔥         [2024/02] Amazon ICLR 2024 paper: Code Representation Learning At Scale.

Table of Contents

  1. Surveys

  2. Models

    2.1 Off-the-Shelf LLM

    2.2 Existing LLM Further Trained on Code

    2.3 General Pretraining on Code

    2.4 Instruction Fine-Tuning on Code

    2.5 Reinforcement Learning on Code

  3. When Coding Meets Reasoning

    3.1 Coding for Reasoning

    3.2 Coding via Planning

  4. Methods/Models for Downstream Tasks

  5. Datasets

    5.1 Pretraining

    5.2 Benchmarks

  6. Recommended Readings

  7. Citation

  8. Star History

1. Surveys

We list six recent surveys on similar topics. While all of them cover language models for code, the first two focus on the NLP side, while the latter four focus on the SE side.

  1. "Large Language Models Meet NL2Code: A Survey", 2022-12, ACL 2023, [paper]

  2. "A Survey on Pretrained Language Models for Neural Code Intelligence", 2022-12, arXiv, [paper]

  3. "An Empirical Comparison of Pre-Trained Models of Source Code", 2023-02, ICSE 2023, [paper]

  4. "Large Language Models for Software Engineering: A Systematic Literature Review", 2023-08, arXiv, [paper]

  5. "Towards an Understanding of Large Language Models in Software Engineering Tasks", 2023-08, arXiv, [paper]

  6. "Pitfalls in Language Models for Code Intelligence: A Taxonomy and Survey", 2023-10, arXiv, [paper]

2. Models

2.1 Off-the-Shelf LLM

These LLMs are not specifically trained for code, but have demonstrated varying degrees of coding capability.

  1. LaMDA: "LaMDA: Language Models for Dialog Applications", 2022-01, arXiv, [paper]

  2. PaLM: "PaLM: Scaling Language Modeling with Pathways", 2022-04, arXiv, [paper]

  3. GPT-NeoX: "GPT-NeoX-20B: An Open-Source Autoregressive Language Model", 2022-04, ACL 2022 Workshop on Challenges & Perspectives in Creating Large Language Models, [paper] [repo]

  4. BLOOM: "BLOOM: A 176B-Parameter Open-Access Multilingual Language Model", 2022-11, arXiv, [paper] [model]

  5. LLaMA: "LLaMA: Open and Efficient Foundation Language Models", 2023-02, arXiv, [paper]

  6. GPT-4: "GPT-4 Technical Report", 2023-03, arXiv, [paper]

  7. LLaMA 2: "Llama 2: Open Foundation and Fine-Tuned Chat Models", 2023-07, arXiv, [paper] [repo]

  8. Phi-1.5: "Textbooks Are All You Need II: phi-1.5 technical report", 2023-09, arXiv, [paper] [model]

  9. Baichuan 2: "Baichuan 2: Open Large-scale Language Models", 2023-09, arXiv, [paper] [repo]

  10. Qwen: "Qwen Technical Report", 2023-09, arXiv, [paper] [repo]

  11. Mistral: "Mistral 7B", 2023-10, arXiv, [paper] [repo]

  12. Gemini: "Gemini: A Family of Highly Capable Multimodal Models", 2023-12, arXiv, [paper]

  13. Phi-2: "Phi-2: The surprising power of small language models", 2023-12, arXiv, [blog]

  14. YAYI2: "YAYI 2: Multilingual Open-Source Large Language Models", 2023-12, arXiv, [paper] [repo]

  15. DeepSeek: "DeepSeek LLM: Scaling Open-Source Language Models with Longtermism", 2024-01, arXiv, [paper] [repo]

  16. Mixtral: "Mixtral of Experts", 2024-01, arXiv, [paper] [blog]

  17. DeepSeekMoE: "DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models", 2024-01, arXiv, [paper] [repo]

  18. Orion: "Orion-14B: Open-source Multilingual Large Language Models", 2024-01, arXiv, [paper] [repo]

  19. OLMo: "OLMo: Accelerating the Science of Language Models", 2024-02, arXiv, [paper] [repo]

  20. Gemma: "Gemma: Open Models Based on Gemini Research and Technology", 2024-02, [paper] [blog]

  21. Claude 3: "The Claude 3 Model Family: Opus, Sonnet, Haiku", 2024-03, [paper] [blog]

2.2 Existing LLM Further Trained on Code

These models are general-purpose LLMs further pretrained on code-related data.

  1. Codex (GPT-3): "Evaluating Large Language Models Trained on Code", 2021-07, arXiv, [paper]

  2. PaLM Coder (PaLM): "PaLM: Scaling Language Modeling with Pathways", 2022-04, arXiv, [paper]

  3. Minerva (PaLM): "Solving Quantitative Reasoning Problems with Language Models", 2022-06, arXiv, [paper]

  4. PaLM 2 * (PaLM 2): "PaLM 2 Technical Report", 2023-05, arXiv, [paper]

  5. Code LLaMA (LLaMA 2): "Code Llama: Open Foundation Models for Code", 2023-08, arXiv, [paper] [repo]

2.3 General Pretraining on Code

These models are Transformer encoders, decoders, and encoder-decoders pretrained from scratch using existing objectives for general language modeling.
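To make these objectives concrete, here is a minimal sketch (not taken from any of the papers below) of the two most common ones, using Hugging Face Transformers: causal language modeling (CLM) as used by decoder models, and masked language modeling (MLM) as used by encoder models. The checkpoint names are placeholders for illustration.

```python
# Hedged sketch: CLM vs. MLM pretraining losses on a code snippet.
# Checkpoints are placeholders; any causal / masked LM would do.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModelForMaskedLM

code = "def add(a, b):\n    return a + b"

# Causal language modeling (decoder-style): predict each token from its left context.
clm_tok = AutoTokenizer.from_pretrained("gpt2")
clm_model = AutoModelForCausalLM.from_pretrained("gpt2")
clm_inputs = clm_tok(code, return_tensors="pt")
clm_loss = clm_model(**clm_inputs, labels=clm_inputs["input_ids"]).loss  # labels are shifted internally

# Masked language modeling (encoder-style): corrupt ~15% of tokens, predict them back.
mlm_tok = AutoTokenizer.from_pretrained("microsoft/codebert-base-mlm")
mlm_model = AutoModelForMaskedLM.from_pretrained("microsoft/codebert-base-mlm")
mlm_inputs = mlm_tok(code, return_tensors="pt")
labels = mlm_inputs["input_ids"].clone()
mask = torch.rand(labels.shape) < 0.15           # toy masking; real pipelines skip special tokens
mlm_inputs["input_ids"][mask] = mlm_tok.mask_token_id
labels[~mask] = -100                             # loss is computed on masked positions only
mlm_loss = mlm_model(**mlm_inputs, labels=labels).loss

print(f"CLM loss: {clm_loss.item():.3f}, MLM loss: {mlm_loss.item():.3f}")
```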

Encoder

  1. CuBERT (MLM + NSP): "Learning and Evaluating Contextual Embedding of Source Code", 2019-12, ICML 2020, [paper] [repo]

  2. CodeBERT (MLM + RTD): "CodeBERT: A Pre-Trained Model for Programming and Natural Languages", 2020-02, EMNLP findings 2020, [paper] [repo]

  3. GraphCodeBERT (MLM + DFG Edge Prediction + DFG Node Alignment): "GraphCodeBERT: Pre-training Code Representations with Data Flow", 2020-09, ICLR 2021, [paper] [repo]

  4. SynCoBERT (MLM + Identifier Prediction + AST Edge Prediction + Contrastive Learning): "SynCoBERT: Syntax-Guided Multi-Modal Contrastive Pre-Training for Code Representation", 2021-08, arXiv, [paper]

  5. DISCO (MLM + Node Type MLM + Contrastive Learning): "Towards Learning (Dis)-Similarity of Source Code from Program Contrasts", 2021-10, ACL 2022, [paper]

  6. Code-MVP (MLM + Type Inference + Contrastive Learning): "CODE-MVP: Learning to Represent Source Code from Multiple Views with Contrastive Pre-Training", 2022-05, NAACL 2022 Technical Track, [paper]

  7. CodeSage (MLM + Deobfuscation + Contrastive Learning): "Code Representation Learning At Scale", 2024-02, ICLR 2024, [paper]

Decoder

  1. GPT-C (CLM): "IntelliCode Compose: Code Generation Using Transformer", 2020-05, ESEC/FSE 2020, [paper]

  2. CodeGPT (CLM): "CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation", 2021-02, NeurIPS Datasets and Benchmarks 2021, [paper] [repo]

  3. CodeParrot (CLM), 2021-12, [blog]

  4. PolyCoder (CLM): "A Systematic Evaluation of Large Language Models of Code", 2022-02, DL4C@ICLR 2022, [paper] [repo]

  5. CodeGen (CLM): "CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis", 2022-03, ICLR 2023, [paper] [repo]

  6. InCoder (Causal Masking): "InCoder: A Generative Model for Code Infilling and Synthesis", 2022-04, ICLR 2023, [paper] [repo]

  7. PyCodeGPT (CLM): "CERT: Continual Pre-Training on Sketches for Library-Oriented Code Generation", 2022-06, IJCAI-ECAI 2022, [paper] [repo]

  8. PanGu-Coder (CLM): "PanGu-Coder: Program Synthesis with Function-Level Language Modeling", 2022-07, arXiv, [paper]

  9. SantaCoder (FIM): "SantaCoder: don't reach for the stars!", 2023-01, arXiv, [paper] [model]

  10. CodeGeeX (CLM): "CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual Evaluations on HumanEval-X", 2023-03, arXiv, [paper] [repo]

  11. StarCoder (FIM): "StarCoder: may the source be with you!", 2023-05, arXiv, [paper] [model]

  12. Phi-1 (CLM): "Textbooks Are All You Need", 2023-06, arXiv, [paper] [model]

  13. CodeFuse (CLM): "CodeFuse-13B: A Pretrained Multi-lingual Code Large Language Model", 2023-10, arXiv, [paper] [model]

  14. CodeShell (CLM), 2023-10, [repo]

  15. DeepSeek Coder (CLM+FIM): "DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence", 2024-01, arXiv, [paper][repo]

  16. StarCoder2 (CLM+FIM): "StarCoder 2 and The Stack v2: The Next Generation", 2024-02, arXiv, [paper][repo]

Encoder-Decoder

  1. PyMT5 (Span Corruption): "PyMT5: multi-mode translation of natural language and Python code with transformers", 2020-10, EMNLP 2020, [paper]

  2. Mastropaolo et al. (Span Corruption): "Studying the Usage of Text-To-Text Transfer Transformer to Support Code-Related Tasks", 2021-02, ICSE 2021, [paper] [repo]

  3. DOBF (MLM + Deobfuscation): "DOBF: A Deobfuscation Pre-Training Objective for Programming Languages", 2021-02, NeurIPS 2021, [paper] [repo]

  4. PLBART (DAE): "Unified Pre-training for Program Understanding and Generation", 2021-03, NAACL 2021, [paper] [repo]

  5. CodeT5 (Span Corruption + Identifier Tagging + Masked Identifier Prediction + Text2Code + Code2Text): "CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation", 2021-09, EMNLP 2021, [paper] [repo]

  6. SPT-Code (Span Corruption + NSP + Method Name Prediction): "SPT-Code: Sequence-to-Sequence Pre-Training for Learning Source Code Representations", 2022-01, ICSE 2022 Technical Track, [paper]

  7. AlphaCode (MLM + CLM): "Competition-Level Code Generation with AlphaCode", 2022-02, Science, [paper] [arxiv]

  8. NatGen (Code Naturalization): "NatGen: Generative pre-training by "Naturalizing" source code", 2022-06, ESEC/FSE 2022, [paper] [repo]

  9. ERNIE-Code (Span Corruption + Pivot-based Translation LM): "ERNIE-Code: Beyond English-Centric Cross-lingual Pretraining for Programming Languages", 2022-12, ACL 2023 Findings, [paper] [repo]

  10. CodeT5+ (Span Corruption + CLM + Text-Code Contrastive Learning + Text-Code Translation): "CodeT5+: Open Code Large Language Models for Code Understanding and Generation", 2023-05, arXiv, [paper] [repo]

  11. AST-T5 (Span Corruption): "AST-T5: Structure-Aware Pretraining for Code Generation and Understanding", 2024-01, arXiv, [paper]

UniLM

  1. CugLM (MLM + NSP + CLM): "Multi-task Learning based Pre-trained Language Model for Code Completion", 2020-12, ASE 2020, [paper]

  2. UniXcoder (MLM + NSP + CLM + Span Corruption + Contrastive Learning + Code2Text): "UniXcoder: Unified Cross-Modal Pre-training for Code Representation", 2022-03, ACL 2022, [paper] [repo]

2.4 Instruction Fine-Tuning on Code

These models apply Instruction Fine-Tuning techniques to enhance the capabilities of Code LLMs (a sketch of the typical training-data format follows the list).

  1. WizardCoder (StarCoder + Evol-Instruct): "WizardCoder: Empowering Code Large Language Models with Evol-Instruct", 2023-06, arXiv, [paper] [repo]

  2. PanGu-Coder 2 (StarCoder + Evol-Instruct + RRTF): "PanGu-Coder2: Boosting Large Language Models for Code with Ranking Feedback", 2023-07, arXiv, [paper]

  3. OctoCoder (StarCoder) / OctoGeeX (CodeGeeX2): "OctoPack: Instruction Tuning Code Large Language Models", 2023-08, arXiv, [paper] [repo]

  4. MFTCoder (Code LLaMA): "MFTCoder: Boosting Code LLMs with Multitask Fine-Tuning", 2023-11, arXiv, [paper] [repo]
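Most of the works above fine-tune on (instruction, response) pairs, computing the loss only on the response tokens. Below is a minimal, hedged sketch of that packing; the prompt template and tokenizer are illustrative assumptions, not any specific model's recipe.

```python
# Hedged sketch: packing one instruction-tuning example with the prompt masked out of the loss.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # placeholder tokenizer

instruction = "Write a Python function that reverses a string."
response = "def reverse(s):\n    return s[::-1]"

prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"  # assumed template
prompt_ids = tok(prompt)["input_ids"]
response_ids = tok(response + tok.eos_token)["input_ids"]

input_ids = prompt_ids + response_ids
labels = [-100] * len(prompt_ids) + response_ids  # -100 = ignored by the cross-entropy loss

print(len(input_ids), labels.count(-100))
```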

2.5 Reinforcement Learning on Code

  1. CompCoder: "Compilable Neural Code Generation with Compiler Feedback", 2022-03, ACL 2022, [paper]

  2. CodeRL: "CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning", 2022-07, NeurIPS 2022, [paper] [repo]

  3. PPOCoder: "Execution-based Code Generation using Deep Reinforcement Learning", 2023-01, TMLR 2023, [paper] [repo]

  4. RLTF: "RLTF: Reinforcement Learning from Unit Test Feedback", 2023-07, arXiv, [paper] [repo]
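The common thread in these works is a reward derived from compiler and unit-test feedback rather than from human preferences. The toy sketch below illustrates such a reward function; the tiered values loosely mirror CodeRL-style rewards (compile error < test failure < all tests pass) but are otherwise arbitrary.

```python
# Hedged sketch: a unit-test-based reward signal for RL on code generation.
import subprocess
import sys
import tempfile

def execution_reward(candidate: str, test_code: str, timeout: float = 5.0) -> float:
    """Run the candidate program against unit tests and map the outcome to a scalar reward."""
    program = candidate + "\n\n" + test_code
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        proc = subprocess.run([sys.executable, path], capture_output=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return -0.6                                  # did not terminate in time
    if proc.returncode == 0:
        return 1.0                                   # all tests passed
    if b"SyntaxError" in proc.stderr:
        return -1.0                                  # does not even parse
    return -0.3                                      # runs, but fails some test

candidate = "def add(a, b):\n    return a + b"
tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0"
print(execution_reward(candidate, tests))            # -> 1.0
```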

3. When Coding Meets Reasoning

3.1 Coding for Reasoning

  1. PAL: "PAL: Program-aided Language Models", 2022-11, ICML 2023, [paper] [repo]

  2. PoT: "Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks", 2022-11, TMLR 2023, [paper] [repo]

  3. CoC: "Chain of Code: Reasoning with a Language Model-Augmented Code Emulator", 2023-12, arXiv, [paper]
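These methods share one mechanism: the model emits a program as its reasoning trace, and an interpreter, not the model, computes the final answer. A minimal illustrative sketch follows; `generate_code` is a hypothetical stub, hard-coded here so the snippet runs stand-alone.

```python
# Hedged sketch of the PAL / Program-of-Thoughts pattern.
def generate_code(question: str) -> str:
    # In practice this would call a code LLM with a few-shot prompt.
    return (
        "eggs_total = 16\n"
        "eggs_eaten = 3\n"
        "eggs_baked = 4\n"
        "price = 2\n"
        "answer = (eggs_total - eggs_eaten - eggs_baked) * price\n"
    )

def solve(question: str) -> float:
    namespace: dict = {}
    exec(generate_code(question), namespace)  # offload the arithmetic to the Python interpreter
    return namespace["answer"]

print(solve("Janet's ducks lay 16 eggs per day ..."))  # -> 18
```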

3.2 Coding via Planning

  1. Self-collaboration: "Self-collaboration Code Generation via ChatGPT", 2023-04, arXiv, [paper]

  2. ChatDev: "Communicative Agents for Software Development", 2023-07, arXiv, [paper] [repo]

  3. MetaGPT: "MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework", 2023-08, arXiv, [paper] [repo]

4. Methods/Models for Downstream Tasks

For each task, the first column contains non-neural methods (e.g. n-gram, TF-IDF, and (occasionally) static program analysis); the second column contains non-Transformer neural methods (e.g. LSTM, CNN, GNN); the third column contains Transformer based methods (e.g. BERT, GPT, T5).

Compiler Optimization

  • "Large Language Models for Compiler Optimization", 2023-09, [paper]

  • "Refining Decompiled C Code with Large Language Models", 2023-10, [paper]

  • "Priority Sampling of Large Language Models for Compilers", 2024-02, [paper]

Frontend Development & Web Agents

  • "Seeking the user interface", 2014-09, ASE 2014, [paper]

  • "pix2code: Generating Code from a Graphical User Interface Screenshot", 2017-05, EICS 2018, [paper]

  • "Machine Learning-Based Prototyping of Graphical User Interfaces for Mobile Apps", 2018-02, TSE 2020, [paper]

  • "Automatic HTML Code Generation from Mock-Up Images Using Machine Learning Techniques", 2019-04, EBBT 2019, [paper]

  • "Sketch2code: Generating a website from a paper mockup", 2019-05, [paper]

  • "HTLM: Hyper-Text Pre-Training and Prompting of Language Models", 2021-07, ICLR 2022, [paper]

  • "MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding", 2021-10, ACL 2022, [paper]

  • "WebKE: Knowledge Extraction from Semi-structured Web with Pre-trained Markup Language Model", 2021-10, CIKM 2021, [paper]

  • "WebGPT: Browser-assisted question-answering with human feedback", 2021-12, [paper]

  • "CM3: A Causal Masked Multimodal Model of the Internet", 2022-01, [paper]

  • "DOM-LM: Learning Generalizable Representations for HTML Documents", 2022-01, [paper]

  • "WebFormer: The Web-page Transformer for Structure Information Extraction", 2022-02, WWW 2022, [paper]

  • "A Dataset for Interactive Vision-Language Navigation with Unknown Command Feasibility", 2022-02, ECCV 2022, [paper]

  • "WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents", 2022-07, NeurIPS 2022, [paper]

  • "Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding", 2022-10, ICML 2023, [paper]

  • "Understanding HTML with Large Language Models", 2022-10, EMNLP 2023 findings, [paper]

  • "WebUI: A Dataset for Enhancing Visual UI Understanding with Web Semantics", 2023-01, CHI 2023, [paper]

  • "Learning UI-to-Code Reverse Generator Using Visual Critic Without Rendering", 2023-05, [paper]

  • "Mind2Web: Towards a Generalist Agent for the Web", 2023-06, NeurIPS 2023, [paper]

  • "A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis", 2023-07, ICLR 2024, [paper]

  • "CogAgent: A Visual Language Model for GUI Agents", 2023-12, [paper]

  • "GPT-4V(ision) is a Generalist Web Agent, if Grounded", 2024-01, [paper]

  • "WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models", 2024-01, [paper]

  • "WebLINX: Real-World Website Navigation with Multi-Turn Dialogue", 2024-02, [paper]

  • "OmniACT: A Dataset and Benchmark for Enabling Multimodal Generalist Autonomous Agents for Desktop and Web", 2024-02, [paper]

  • "Design2Code: How Far Are We From Automating Front-End Engineering?", 2024-03, [paper]

5. Datasets

5.1 Pretraining

  1. CodeSearchNet: "CodeSearchNet Challenge: Evaluating the State of Semantic Code Search", 2019-09, arXiv, [paper] [repo] [data]

  2. The Pile: "The Pile: An 800GB Dataset of Diverse Text for Language Modeling", 2020-12, arXiv, [paper] [data]

  3. CodeParrot, 2022-02, [data]

  4. The Stack: "The Stack: 3 TB of permissively licensed source code", 2022-11, arXiv, [paper] [data]

  5. ROOTS: "The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset", 2023-03, NeurIPS 2022 Datasets and Benchmarks Track, [paper] [data]

  6. The Stack v2: "StarCoder 2 and The Stack v2: The Next Generation", 2024-02, arXiv, [paper] [data]
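Several of these corpora are hosted on the Hugging Face Hub and are large enough that streaming is usually preferable to downloading. A quick, hedged sketch follows; the dataset id, per-language layout, and field names are assumptions to verify against the dataset card.

```python
# Hedged sketch: streaming a slice of a large pretraining corpus.
from itertools import islice
from datasets import load_dataset

ds = load_dataset(
    "bigcode/the-stack-dedup",   # assumed dataset id (near-deduplicated Stack v1)
    data_dir="data/python",      # assumed per-language directory layout
    split="train",
    streaming=True,              # avoids downloading multiple terabytes
)
for sample in islice(ds, 1):
    print(list(sample.keys()))   # inspect available fields, e.g. "content"
```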

5.2 Benchmarks

  1. CodeXGLUE: "CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation", 2021-02, NeurIPS Datasets and Benchmarks 2021, [paper] [repo] [data]

Program Synthesis

Date Venue Benchmark Size Language Source
2018-02 LREC 2018 NL2Bash 9305 Bash "NL2Bash: A Corpus and Semantic Parser for Natural Language Interface to the Linux Operating System" [paper] [data]
2018-08 EMNLP 2018 CONCODE 104K Java "Mapping Language to Code in Programmatic Context" [paper] [data]
2019-10 EMNLP-IJCNLP 2019 JuICe 1.5M/3725 * Python "JuICe: A Large Scale Distantly Supervised Dataset for Open Domain Context-based Code Generation" [paper] [data]
2021-05 NeurIPS 2021 APPS 10000 Python "Measuring Coding Challenge Competence With APPS" [paper] [data]
2021-07 arXiv HumanEval 164 Python "Evaluating Large Language Models Trained on Code" [paper] [data]
2021-08 arXiv MBPP/MathQA-Python 974/23914 Python "Program Synthesis with Large Language Models" [paper] [MBPP] [MathQA-Python]
2021-08 ACL/IJCNLP 2021 PlotCoder 40797 Python "PlotCoder: Hierarchical Decoding for Synthesizing Visualization Code in Programmatic Context" [paper] [data]
2022-01 arXiv DSP 1119 Python "Training and Evaluating a Jupyter Notebook Data Science Assistant" [paper] [data]
2022-02 Science CodeContests 13610 C++, Python, Java "Competition-Level Code Generation with AlphaCode" [paper] [data]
2022-03 EACL 2023 Findings MCoNaLa 896 Python "MCoNaLa: A Benchmark for Code Generation from Multiple Natural Languages" [paper] [data]
2022-06 arXiv AixBench 336 Java "AixBench: A Code Generation Benchmark Dataset" [paper] [data]
2022-08 IEEE Trans. Software Engineering MultiPL-E "MultiPL-E: A Scalable and Extensible Approach to Benchmarking Neural Code Generation" [paper] [data]
2022-10 ICLR 2023 MBXP 12.4K Python, Java, JS, TypeScript, Go, C#, PHP, Ruby, Kotlin, C++, Perl, Scala, Swift "Multi-lingual Evaluation of Code Generation Models" [paper] [data]
2022-10 ICLR 2023 Multilingual HumanEval 1.9K Python, Java, JS, TypeScript, Go, C#, PHP, Ruby, Kotlin, Perl, Scala, Swift "Multi-lingual Evaluation of Code Generation Models" [paper] [data]
2022-10 ICLR 2023 MathQA-X 5.6K Python, Java, JS "Multi-lingual Evaluation of Code Generation Models" [paper] [data]
2022-11 arXiv ExeDS 534 Python "Execution-based Evaluation for Data Science Code Generation Models" [paper] [data]
2022-11 arXiv DS-1000 1000 Python "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation" [paper] [data]
2022-12 arXiv ODEX 945 Python "Execution-Based Evaluation for Open-Domain Code Generation" [paper] [data]
2023-02 arXiv CoderEval 460 Python, Java "CoderEval: A Benchmark of Pragmatic Code Generation with Generative Pre-trained Models" [paper] [data]
2023-03 arXiv xCodeEval 5.5M C, C#, C++, Go, Java, JS, Kotlin, PHP, Python, Ruby, Rust "xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval" [paper] [data]
2023-03 arXiv HumanEval-X 820 Python, C++, Java, JS, Go "CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual Evaluations on HumanEval-X" [paper] [data]
2023-05 arXiv HumanEval+ 164 Python "Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation" [paper] [data]
2023-06 arXiv StudentEval 1749 $^\dagger$ Python "StudentEval: A Benchmark of Student-Written Prompts for Large Language Models of Code" [paper] [data]
2023-06 NeurIPS 2023 DotPrompts 10538 $^\ddagger$ Java "Guiding Language Models of Code with Global Context using Monitors" [paper] [data]
2023-08 arXiv HumanEvalPack 984 Python, JS, Go, Java, C++, Rust "OctoPack: Instruction Tuning Code Large Language Models" [paper] [data]
2023-09 arXiv CodeApex 476 C++ "CodeApex: A Bilingual Programming Evaluation Benchmark for Large Language Models" [paper] [data]
2023-09 arXiv VerilogEval 8645/156 $^\diamond$ Verilog "VerilogEval: Evaluating Large Language Models for Verilog Code Generation" [paper] [data]
2023-11 arXiv ML-Bench 10040 Bash "ML-Bench: Large Language Models Leverage Open-source Libraries for Machine Learning Tasks" [paper] [data]

* Automatically mined/human-annotated

$^\dagger$ 1749 prompts for 48 problems

$^\ddagger$ 10538 prompts for 1420 problems

$^\diamond$ Machine/human prompts
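Most of the execution-based benchmarks above (HumanEval, MBPP, HumanEval+, etc.) are reported with pass@k. Below is a sketch of the unbiased, numerically stable estimator introduced in "Evaluating Large Language Models Trained on Code", where n is the number of samples per problem and c the number that pass all unit tests.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased per-problem pass@k: 1 - C(n - c, k) / C(n, k)."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 200 samples for one problem, 37 of which pass all tests.
print(round(pass_at_k(200, 37, 1), 4))  # pass@1 reduces to c / n = 0.185
```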

Text-to-SQL

  • "Deep learning driven natural languages text to SQL query conversion: A survey", 2022-08, arXiv, [paper]
  • "Recent Advances in Text-to-SQL: A Survey of What We Have and What We Expect", 2022-08, COLING 2022, [paper]
  • "A Survey on Text-to-SQL Parsing: Concepts, Methods, and Future Directions", 2022-08, arXiv, [paper]
  • "A survey on deep learning approaches for text-to-SQL", 2023-01, VLDB J., [paper]
Date Venue Benchmark Size Source
2017-08 arXiv WikiSQL 80654 "Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning" [paper] [data]
2018-06 CL 2018 Advising 4570 "Improving Text-to-SQL Evaluation Methodology" [paper] [data]
2018-09 EMNLP 2018 Spider 10181 "Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task" [paper] [data]
2019-06 ACL 2019 SParC 12726 "SParC: Cross-Domain Semantic Parsing in Context" [paper] [data]
2019-07 WWW 2020 MIMICSQL 10000 "Text-to-SQL Generation for Question Answering on Electronic Medical Records" [paper] [data]
2019-09 EMNLP-IJCNLP 2019 CoSQL 15598 "CoSQL: A Conversational Text-to-SQL Challenge Towards Cross-Domain Natural Language Interfaces to Databases" [paper] [data]
2020-05 LREC 2020 Criteria-to-SQL 2003 "Dataset and Enhanced Model for Eligibility Criteria-to-SQL Semantic Parsing" [paper] [data]
2020-10 EMNLP 2020 Findings Squall 11276 "On the Potential of Lexico-logical Alignments for Semantic Parsing to SQL Queries" [paper] [data]
2020-10 NAACL-HLT 2021 Spider-Realistic 508 "Structure-Grounded Pretraining for Text-to-SQL" [paper] [data]
2021-06 ACL/IJCNLP 2021 Spider-Syn 8034 "Towards Robustness of Text-to-SQL Models against Synonym Substitution" [paper] [data]
2021-06 NLP4Prog 2021 SEDE 12023 "Text-to-SQL in the Wild: A Naturally-Occurring Dataset Based on Stack Exchange Data" [paper] [data]
2021-06 ACL/IJCNLP 2021 KaggleDBQA 400 "KaggleDBQA: Realistic Evaluation of Text-to-SQL Parsers" [paper] [data]
2021-09 EMNLP 2021 Spider-DK 535 "Exploring Underexplored Limitations of Cross-Domain Text-to-SQL Generalization" [paper] [data]
2022-05 NAACL 2022 Findings Spider-SS/CG 8034/45599 "Measuring and Improving Compositional Generalization in Text-to-SQL via Component Alignment" [paper] [data]
2023-05 arXiv BIRD 12751 "Can LLM Already Serve as A Database Interface? A BIg Bench for Large-Scale Database Grounded Text-to-SQLs" [paper] [data]

Code Translation

Date Venue Benchmark Size Language Source
2020-06 NeurIPS 2020 Transcoder GeeksforGeeks 1.4K C++, Java, Python "Unsupervised Translation of Programming Languages" [paper] [data]
2021-02 NeurIPS Datasets and Benchmarks 2021 CodeTrans 11.8K Java, C# "CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation" [paper] [data]
2021-08 ACL 2023 Findings Avatar 9515 Java, Python "AVATAR: A Parallel Corpus for Java-Python Program Translation" [paper] [data]
2022-06 AAAI 2022 CoST 132K C++, Java, Python, C#, JS, PHP, C "Multilingual Code Snippets Training for Program Translation" [paper] [data]
2022-06 arXiv XLCoST 567K C++, Java, Python, C#, JS, PHP, C "XLCoST: A Benchmark Dataset for Cross-lingual Code Intelligence" [paper] [data]
2023-03 arXiv xCodeEval 5.6M C, C#, C++, Go, Java, JS, Kotlin, PHP, Python, Ruby, Rust "xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval" [paper] [data]
2023-03 arXiv HumanEval-X 1640 Python, C++, Java, JS, Go "CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual Evaluations on HumanEval-X" [paper] [data]
2023-08 arXiv G-TransEval 4000 C++, Java, C#, JS, Python "On the Evaluation of Neural Code Translation: Taxonomy and Benchmark" [paper] [data]
2023-10 arXiv CodeTransOcean 270.5K 45 "CodeTransOcean: A Comprehensive Multilingual Benchmark for Code Translation" [paper] [data]

Program Repair

  • "Neural Program Repair: Systems, Challenges and Solutions", 2022-02, Internetware 2022, [paper]
  • "A Survey of Learning-based Automated Program Repair", 2023-01, arXiv, [paper]
  • "A Survey on Automated Program Repair Techniques", 2023-03, arXiv, [paper]
Date Venue Benchmark Size Language Source
2014-07 ISSTA 2014 Defects4J 357 Java "Defects4J: A Database of Existing Faults to Enable Controlled Testing Studies for Java Programs" [paper] [data]
2015-12 IEEE Trans. Software Engineering ManyBugs/IntroClass 185/998 C "The ManyBugs and IntroClass Benchmarks for Automated Repair of C Programs" [paper] [data]
2016-11 FSE 2016 BugAID 105K JS "Discovering Bug Patterns in JavaScript" [paper] [data]
2017-02 AAAI 2017 DeepFix 6971 C "DeepFix: Fixing Common C Language Errors by Deep Learning" [paper] [data]
2017-05 ICSE-C 2017 Codeflaws 3902 C "Codeflaws: A Programming Competition Benchmark for Evaluating Automated Program Repair Tools" [paper] [data]
2017-10 SPLASH 2017 QuixBugs 80 Java, Python "QuixBugs: a multi-lingual program repair benchmark set based on the quixey challenge" [paper] [data]
2018-05 MSR 2018 Bugs.jar 1158 Java "Bugs.jar: a large-scale, diverse dataset of real-world Java bugs" [paper] [data]
2018-12 ACM Trans. Softw. Eng. Methodol. BFP 124K Java "An Empirical Study on Learning Bug-Fixing Patches in the Wild via Neural Machine Translation" [paper] [data]
2019-01 SANER 2019 Bears 251 Java "Bears: An Extensible Java Bug Benchmark for Automatic Program Repair Studies" [paper] [data]
2019-01 ICSE 2019 unnamed 21.8K * Java "On Learning Meaningful Code Changes via Neural Machine Translation" [paper] [data]
2019-04 ICST 2019 BugsJS 453 JS "BugsJS: a Benchmark of JavaScript Bugs" [paper] [data]
2019-05 ICSE 2019 BugSwarm 1827/1264 Java/Python "BugSwarm: mining and continuously growing a dataset of reproducible failures and fixes" [paper] [data]
2019-05 ICSE 2019 CPatMiner 17K * Java "Graph-based mining of in-the-wild, fine-grained, semantic code change patterns" [paper] [data]
2019-05 MSR 2020 ManySStuBs4J 154K Java "How Often Do Single-Statement Bugs Occur? The ManySStuBs4J Dataset" [paper] [data]
2019-11 ASE 2019 Refactory 1783 Python "Re-factoring based program repair applied to programming assignments" [paper] [data]
2020-07 ISSTA 2020 CoCoNut 24M Java, Python, C, JS "CoCoNuT: combining context-aware neural translation models using ensemble for program repair" [paper] [data]
2020-10 Inf. Softw. Technol. Review4Repair 58021 Java "Review4Repair: Code Review Aided Automatic Program Repairing" [paper] [data]
2020-11 ESEC/FSE 2020 BugsInPy 493 Python "BugsInPy: A Database of Existing Bugs in Python Programs to Enable Controlled Testing and Debugging Studies" [paper] [data]
2021-07 ICML 2021 TFix 105K JS "TFix: Learning to Fix Coding Errors with a Text-to-Text Transformer" [paper] [data]
2021-08 arXiv Megadiff 663K * Java "Megadiff: A Dataset of 600k Java Source Code Changes Categorized by Diff Size" [paper] [data]
2022-01 MSR 2022 SSB/TSSB 9M/3M Python "TSSB-3M: Mining single statement bugs at massive scale" [paper] [data]
2022-10 MSR 2022 FixJS 324K JS "FixJS: a dataset of bug-fixing JavaScript commits" [paper] [data]
2022-11 ESEC/FSE 2022 TypeBugs 93 Python "PyTER: Effective Program Repair for Python Type Errors" [paper] [data]
2023-03 arXiv xCodeEval 4.7M C, C#, C++, Go, Java, JS, Kotlin, PHP, Python, Ruby, Rust "xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval" [paper] [data]
2023-04 arXiv RunBugRun 450K C, C++, Java, Python, JS, Ruby, Go, PHP "RunBugRun -- An Executable Dataset for Automated Program Repair" [paper] [data]
2023-08 arXiv HumanEvalPack 984 Python, JS, Go, Java, C++, Rust "OctoPack: Instruction Tuning Code Large Language Models" [paper] [data]

* These are code-change datasets, and only a subset therein concerns bug fixing.

Code Summarization

  • "A Survey of Automatic Source Code Summarization", 2022-02, Symmetry, [paper]
Date Venue Benchmark Size Language Source
2016-08 ACL 2016 CODE-NN 66K/32K C#/SQL "Summarizing Source Code using a Neural Attention Model" [paper] [data]
2017-07 IJCNLP 2017 unnamed 150K Python "A parallel corpus of Python functions and documentation strings for automated code documentation and code generation" [paper] [data]
2018-05 ICPC 2018 DeepCom 588K Java "Deep code comment generation" [paper] [data]
2018-07 IJCAI 2018 TL-CodeSum 411K Java "Summarizing Source Code with Transferred API Knowledge" [paper] [data]
2018-11 ASE 2018 unnamed 109K Python "Improving Automatic Source Code Summarization via Deep Reinforcement Learning" [paper] [data]
2019-09 arXiv CodeSearchNet 2.3M Go, JS, Python, PHP, Java, Ruby "CodeSearchNet Challenge: Evaluating the State of Semantic Code Search" [paper] [data]
2023-08 arXiv HumanEvalPack 984 Python, JS, Go, Java, C++, Rust "OctoPack: Instruction Tuning Code Large Language Models" [paper] [data]

Defect/Vulnerability Detection

  • "Benchmarking Software Vulnerability Detection Techniques: A Survey", 2023-03, arXiv, [paper]
Date Venue Benchmark Size Language Source
2018-01 NDSS 2018 CGD 62K C, C++ "VulDeePecker: A Deep Learning-Based System for Vulnerability Detection" [paper] [data]
2018-04 IEEE Trans. Ind. Informatics unnamed 32988 C, C++ "Cross-Project Transfer Representation Learning for Vulnerable Function Discovery" [paper] [data]
2018-07 ICMLA 2018 Draper VDISC 12.8M C, C++ "Automated Vulnerability Detection in Source Code Using Deep Representation Learning" [paper] [data]
2018-07 IEEE TDSC SySeVR 15591 C, C++ "SySeVR: A Framework for Using Deep Learning to Detect Software Vulnerabilities" [paper] [data]
2019-02 MSR 2019 unnamed 624 Java "A Manually-Curated Dataset of Fixes to Vulnerabilities of Open-Source Software" [paper] [data]
2019-09 NeurIPS 2019 Devign 49K C "Devign: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks" [paper] [data]
2019-11 IEEE TDSC unnamed 170K C, C++ "Software Vulnerability Discovery via Learning Multi-Domain Knowledge Bases" [paper] [data]
2019-12 ICLR 2020 GREAT 2.8M Python "Global Relational Models of Source Code" [paper] [data]
2020-01 IEEE TDSC MVD 182K C, C++ "μVulDeePecker: A Deep Learning-Based System for Multiclass Vulnerability Detection" [paper] [data]
2020-02 ICICS 2019 unnamed 1471 C "Deep Learning-Based Vulnerable Function Detection: A Benchmark" [paper] [data]
2020-09 IEEE Trans. Software Eng. ReVeal 18K C "Deep Learning based Vulnerability Detection: Are We There Yet?" [paper] [data]
2020-09 MSR 2020 Big-Vul 265K C, C++ "A C/C++ Code Vulnerability Dataset with Code Changes and CVE Summaries" [paper] [data]
2021-02 ICSE (SEIP) 2021 D2A 1.3M C, C++ "D2A: A Dataset Built for AI-Based Vulnerability Detection Methods Using Differential Analysis" [paper] [data]
2021-05 NeurIPS 2021 PyPIBugs 2374 Python "Self-Supervised Bug Detection and Repair" [paper] [data]
2021-07 In PROMISE 2021 CVEfixes 5495 27 "CVEfixes: Automated Collection of Vulnerabilities and Their Fixes from Open-Source Software" [paper] [data]
2021-08 ESEC/FSE 2021 CrossVul 27476 40+ "CrossVul: a cross-language vulnerability dataset with commit data" [paper] [data]
2023-04 RAID 2023 DiverseVul 349K C, C++ "DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection" [paper] [data]
2023-06 arXiv VulnPatchPairs 26K C "Limits of Machine Learning for Automatic Vulnerability Detection" [paper] [data]
2023-11 arXiv VulBench 455 C "How Far Have We Gone in Vulnerability Detection Using Large Language Models" [paper] [data]

Code Retrieval

  • "Code Search: A Survey of Techniques for Finding Code", 2022-04, ICSME 2021, [[paper](ACM Comput. Surv)]
  • "A Survey of Deep Code Search", 2023-05, arXiv, [paper]
Date Venue Benchmark Size Language Source
2018-03 WWW 2018 StaQC 148K/120K Python/SQL "StaQC: A Systematically Mined Question-Code Dataset from Stack Overflow" [paper] [data]
2018-05 ICSE 2018 DeepCS 16.2M Java "Deep Code Search" [paper] [data]
2018-05 MSR 2018 CoNaLa 600K/2.9K Python "Learning to Mine Aligned Code and Natural Language Pairs from Stack Overflow" [paper] [data]
2019-08 arXiv unnamed 287 Java "Neural Code Search Evaluation Dataset" [paper] [data]
2019-09 arXiv CodeSearchNet 2.3M/99 Go, PHP, JS, Python, Java, Ruby "CodeSearchNet Challenge: Evaluating the State of Semantic Code Search" [paper] [data]
2020-02 SANER 2020 CosBench 52 Java "Are the Code Snippets What We Are Searching for? A Benchmark and an Empirical Study on Code Search with Natural-Language Queries" [paper] [data]
2020-08 arXiv SO-DS 2.2K Python "Neural Code Search Revisited: Enhancing Code Snippet Retrieval through Natural Language Intent" [paper] [data]
2020-10 ACM Trans. Knowl. Discov. Data FB-Java 249K Java "Deep Graph Matching and Searching for Semantic Code Retrieval" [paper] [data]
2021-02 NeurIPS Datasets and Benchmarks 2021 AdvTest/WebQueryTest 280K/1K Python "CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation" [paper] [data]
2021-05 ACL/IJCNLP 2021 CoSQA 21K Python "CoSQA: 20,000+ Web Queries for Code Search and Question Answering" [paper] [data]

Type Inference

Date Venue Benchmark Size Language Source
2019-12 ESEC/FSE 2020 TypeWriter OSS 208K Python "TypeWriter: Neural Type Prediction with Search-based Validation" [paper] [data]
2020-04 PLDI 2020 Typilus 252K Python "Typilus: Neural Type Hints" [paper] [data]
2020-04 ICLR 2020 LambdaNet 300 * TypeScript "LambdaNet: Probabilistic Type Inference using Graph Neural Networks" [paper] [data]
2021-04 MSR 2021 ManyTypes4Py 869K Python "ManyTypes4Py: A Benchmark Python Dataset for Machine Learning-based Type Inference" [paper] [data]
2022-10 MSR 2022 ManyTypes4TypeScript 9.1M TypeScript "ManyTypes4TypeScript: a comprehensive TypeScript dataset for sequence-based type inference" [paper] [data]
2023-02 ECOOP 2023 TypeWeaver 513 * TypeScript "Do Machine Learning Models Produce TypeScript Types That Type Check?" [paper] [data]
2023-03 ICLR 2023 BetterTypes4Py/InferTypes4Py 608K/4.6K Python "TypeT5: Seq2seq Type Inference using Static Analysis" [paper] [data]
2023-05 arXiv OpenTau 744 * TypeScript "Type Prediction With Program Decomposition and Fill-in-the-Type Training" [paper] [data]

* These are project counts.

Commit Message Generation

  • "On the Evaluation of Commit Message Generation Models: An Experimental Study", 2021-07, ICSME 2021, [paper]
Date Venue Benchmark Size Language Source
2017-03 ICPC 2017 unnamed 509K Java "Towards Automatic Generation of Short Summaries of Commits" [paper] [data]
2017-04 ACL 2017 CommitGen 153K Python, JS, C++, Java "A Neural Architecture for Generating Natural Language Descriptions from Source Code Changes" [paper] [data]
2017-08 ASE 2017 CommitGen 32K/75K * Java "Automatically Generating Commit Messages from Diffs using Neural Machine Translation" [paper] [data]
2018-09 ASE 2018 NNGen 27K Java "Neural-machine-translation-based commit message generation: how far are we?" [paper] [data]
2019-05 MSR 2019 PtrGNCMsg 64.9K Java "Generating commit messages from diffs using pointer-generator network" [paper] [data](https://zenodo.org/records/2593787)
2019-08 IJCAI 2019 CoDiSum 90.7K Java "Commit message generation for source code changes" [paper] [data]
2019-12 IEEE Trans. Software Eng. ATOM 160K Java "ATOM: Commit Message Generation Based on Abstract Syntax Tree and Hybrid Ranking" [paper] [data]
2021-05 arXiv CommitBERT 346K Python, PHP, Go, Java, JS, Ruby "CommitBERT: Commit Message Generation Using Pre-Trained Programming Language Model" [paper] [data]
2021-07 ICSME 2021 MCMD 2.25M Java, C#, C++, Python, JS "On the Evaluation of Commit Message Generation Models: An Experimental Study" [paper] [data]
2021-07 ACM Trans. Softw. Eng. Methodol. CoRec 107K Java "Context-aware Retrieval-based Deep Commit Message Generation" [paper] [data]
2023-07 ASE 2023 ExGroFi 19263 Java "Delving into Commit-Issue Correlation to Enhance Commit Message Generation Models" [paper] [data]
2023-08 ASE 2023 CommitChronicle 10.7M 20 "From Commit Message Generation to History-Aware Commit Message Completion" [paper] [data]

* with/without verb-direct object filter

Repo-Level Coding

Date Venue Benchmark Size Language Source
2023-03 arXiv RepoEval 1600/1600/373 * Python "RepoCoder: Repository-Level Code Completion Through Iterative Retrieval and Generation" [paper] [data]
2023-06 arXiv RepoBench 890K/9M/43K $^\dagger$ Python, Java "RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems" [paper] [data]
2023-06 NeurIPS 2023 PragmaticCode 880 ** Java "Guiding Language Models of Code with Global Context using Monitors" [paper] [data]
2023-06 arXiv Stack-Repo 816K Java "RepoFusion: Training Code Models to Understand Your Repository" [paper] [data]
2023-09 arXiv CodePlan 645/21 $^\ddagger$ C#/Python $^\ddagger$ "CodePlan: Repository-level Coding using LLMs and Planning" [paper] [data] $^\S$
2023-10 arXiv SWE-Bench 2294 Python "SWE-bench: Can Language Models Resolve Real-World GitHub Issues?" [paper] [data]
2023-10 arXiv CrossCodeEval 9928 Python, Java, TypeScript, C# "CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion" [paper] [data]

* Line Completion/API Invocation Completion/Function Completion

$^\dagger$ Retrieval/Completion/Pipeline

** File count

$^\ddagger$ Migration/Temporal Edit

$^\S$ This is the link given in the paper, but we are unable to access it at the time of writing.

Other tasks are coming soon!

6. Recommended Readings

30 papers as a primer on LLMs.

Date Keyword Paper TL;DR
2014-09 Attention Neural Machine Translation by Jointly Learning to Align and Translate The original attention, proposed for encoder-decoder RNN
2015-08 BPE Neural Machine Translation of Rare Words with Subword Units Byte-pair encoding: split rare words into subword units
2017-06 Transformer Attention Is All You Need Replace LSTM with self-attention for long-range dependency and parallel training
2017-10 Mixed Precision Training Mixed Precision Training Store model weights in fp16 to save memory
2018-04 GLUE GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding A language understanding benchmark
2018-06 GPT Improving Language Understanding by Generative Pre-Training Pretraining-finetuning paradigm applied to Transformer decoder
2018-10 BERT BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding Masked Language Modeling (MLM) applied to Transformer encoder for pretraining
2019-02 GPT-2 Language Models are Unsupervised Multitask Learners GPT made larger (1.5B). They found language models implicitly learn about downstream tasks (such as translation) during pretraining.
2019-05 SuperGLUE SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems Another language understanding benchmark
2019-07 RoBERTa RoBERTa: A Robustly Optimized BERT Pretraining Approach An optimized BERT
2019-09 Megatron-LM Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism Model parallelism
2019-10 ZeRO ZeRO: Memory Optimizations Toward Training Trillion Parameter Models Memory-efficient distributed optimization
2019-10 T5 Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer Transformer encoder-decoder pretrained with an MLM-like denoising objective
2020-05 GPT-3 Language Models are Few-Shot Learners By training an even larger version of GPT-2 (175B), they discovered a new learning paradigm: In-Context Learning (ICL)
2020-09 MMLU Measuring Massive Multitask Language Understanding A world-knowledge and complex reasoning benchmark
2020-12 Pile The Pile: An 800GB Dataset of Diverse Text for Language Modeling A diverse pretraining dataset
2021-06 LoRA LoRA: Low-Rank Adaptation of Large Language Models Memory-efficient finetuning
2021-09 FLAN Finetuned Language Models Are Zero-Shot Learners Instruction-finetuning
2021-10 T0 Multitask Prompted Training Enables Zero-Shot Task Generalization Also instruction finetuning, but applied to the much smaller T5
2021-12 Gopher Scaling Language Models: Methods, Analysis & Insights from Training Gopher A 280B LLM with comprehensive experiments
2022-01 CoT Chain-of-Thought Prompting Elicits Reasoning in Large Language Models Chain-of-Thought reasoning
2022-03 InstructGPT Training language models to follow instructions with human feedback GPT-3 instruction finetuned with RLHF (reinforcement learning from human feedback)
2022-03 Chinchilla Training Compute-Optimal Large Language Models A smaller (70B) version of Gopher that's pretrained on more data
2022-04 PaLM PaLM: Scaling Language Modeling with Pathways The largest dense language model at the time (540B)
2022-05 0-shot CoT Large Language Models are Zero-Shot Reasoners Tell LLMs to think step by step, and they can actually do it
2022-06 BIG Bench Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models Another world-knowledge and complex reasoning benchmark
2022-06 Emergent Ability Emergent Abilities of Large Language Models A review on emergent abilities
2022-10 Flan Scaling Instruction-Finetuned Language Models Consolidate all the existing instruction tuning datasets, and you get SOTA
2022-11 BLOOM BLOOM: A 176B-Parameter Open-Access Multilingual Language Model The largest open-source LLM at the time, trained on 46 natural languages, with detailed discussion of training and evaluation
2022-12 Self-Instruct Self-Instruct: Aligning Language Models with Self-Generated Instructions Instruction tuning using LLM-generated data

This list aims to provide the essential background for understanding current LLM technologies, and thus excludes more recent models such as LLaMA, GPT-4, or PaLM 2. For comprehensive reviews of these more general topics, we refer readers to other sources such as this paper or these repositories: Awesome-LLM, Awesome AIGC Tutorials. For specific domains, see Awesome Domain LLM, Awesome Tool Learning, and Awesome-LLM-MT.

7. Citation

If you find this repo or our survey helpful, please consider citing us:

@article{zhang2023unifying,
      title={Unifying the Perspectives of NLP and Software Engineering: A Survey on Language Models for Code},
      author={Ziyin Zhang and Chaoyu Chen and Bingchang Liu and Cong Liao and Zi Gong and Hang Yu and Jianguo Li and Rui Wang},
      year={2023},
      journal={CoRR},
      volume={abs/2311.07989},
      url={https://doi.org/10.48550/arXiv.2311.07989},
      doi={10.48550/ARXIV.2311.07989},
      eprint={2311.07989},
      eprinttype={arXiv},
}

8. Star History

Star History Chart