Prompt-In-Context-Learning

Awesome resources for in-context learning and prompt engineering: mastery of LLMs such as ChatGPT, GPT-3, and FlanT5, with up-to-date and cutting-edge updates.



An Open-Source Engineering Guide for Prompt-in-context-learning from EgoAlpha Lab.

📝 Papers | ⚡️ Playground | 🛠 Prompt Engineering | 🌍 ChatGPT Prompt | ⛳ LLMs Usage Guide


⭐️ Shining ⭐️: This is a fresh, daily-updated collection of resources for in-context learning and prompt engineering. As Artificial General Intelligence (AGI) approaches, let's take action and become super learners, positioning ourselves at the forefront of this exciting era and striving for personal and professional greatness.

The resources include:

🎉Papers🎉: The latest papers about In-Context Learning, Prompt Engineering, Agent, and Foundation Models.

🎉Playground🎉: Large language models (LLMs) that enable prompt experimentation.

🎉Prompt Engineering🎉: Prompt techniques for leveraging large language models (see the short sketch after this list).

🎉ChatGPT Prompt🎉: Prompt examples that can be applied in our work and daily lives.

🎉LLMs Usage Guide🎉: A guide to getting started quickly with large language models using LangChain.
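
As a quick illustration of what "in-context learning" means in practice, here is a minimal, self-contained sketch of a few-shot prompt that conditions an LLM on worked examples before the actual query. It is illustrative only: the task, examples, and query below are invented for this sketch and are not taken from the papers listed in this repo.

```python
# A minimal illustration of in-context (few-shot) learning: the "training"
# happens entirely inside the prompt, via a handful of worked examples.
# Task, examples, and query are invented for illustration only.
examples = [
    ("The movie was a waste of two hours.", "negative"),
    ("Absolutely loved the soundtrack and the acting.", "positive"),
    ("It was fine, nothing special.", "neutral"),
]

query = "The plot dragged, but the ending surprised me in a good way."

# Assemble the few-shot prompt: instruction, demonstrations, then the query.
prompt_lines = [
    "Classify the sentiment of each review as positive, negative, or neutral.",
    "",
]
for text, label in examples:
    prompt_lines.append(f"Review: {text}")
    prompt_lines.append(f"Sentiment: {label}")
    prompt_lines.append("")
prompt_lines.append(f"Review: {query}")
prompt_lines.append("Sentiment:")

prompt = "\n".join(prompt_lines)
print(prompt)  # Send this string to any LLM; it should complete the final label.
```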

In the future, there will likely be two types of people on Earth (perhaps even on Mars, but that's a question for Musk):

  • Those who enhance their abilities through the use of AIGC;
  • Those whose jobs are replaced by AI automation.

💎EgoAlpha: Hello, human 👤, are you ready?

Table of Contents

  • 📢 News
  • 📜 Papers
  • 👨‍💻 LLM Usage
  • ✉️ Contact
  • 🙏 Acknowledgements

📢 News

☄️ EgoAlpha releases TrustGPT, which focuses on reasoning. Trust the GPT with the strongest reasoning abilities for authentic and reliable answers. You can click here or visit the Playground directly to experience it.

👉 Complete news history 👈


📜 Papers

Click on a title to jump directly to the corresponding PDF.

Survey

Retrieval-Augmented Generation for Large Language Models: A Survey (2023.12.18)

Retrieval-augmented Generation to Improve Math Question-Answering: Trade-offs Between Groundedness and Human Preference (2023.10.04)

A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future (2023.09.27)

The Rise and Potential of Large Language Model Based Agents: A Survey (2023.09.14)

Textbooks Are All You Need II: phi-1.5 technical report (2023.09.11)

Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models (2023.09.03)

Point-Bind & Point-LLM: Aligning Point Cloud with Multi-modality for 3D Understanding, Generation, and Instruction Following (2023.09.01)

Large language models in medicine: the potentials and pitfalls (2023.08.31)

Large Graph Models: A Perspective (2023.08.28)

A Survey on Large Language Model based Autonomous Agents (2023.08.22)

👉Complete paper list 🔗 for "Survey"👈

Prompt Engineering

Prompt Design

A mathematical perspective on Transformers (2023.12.17)

LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment (2023.12.15)

LLM360: Towards Fully Transparent Open-Source LLMs (2023.12.11)

Control Risk for Potential Misuse of Artificial Intelligence in Science (2023.12.11)

WonderJourney: Going from Anywhere to Everywhere (2023.12.06)

LLaVA-Grounding: Grounded Visual Chat with Large Multimodal Models (2023.12.05)

ChatTwin: Toward Automated Digital Twin Generation for Data Center via Large Language Models (2023.11.15)

u-LLaVA: Unifying Multi-Modal Tasks via Large Language Model (2023.11.09)

TopicGPT: A Prompt-based Topic Modeling Framework (2023.11.02)

CodeFusion: A Pre-trained Diffusion Model for Code Generation (2023.10.26)

👉Complete paper list 🔗 for "Prompt Design"👈

Chain of Thought

A Logically Consistent Chain-of-Thought Approach for Stance Detection (2023.12.26)

Assessing the Impact of Prompting, Persona, and Chain of Thought Methods on ChatGPT's Arithmetic Capabilities (2023.12.22)

G-LLaVA: Solving Geometric Problem with Multi-Modal Large Language Model (2023.12.18)

ProCoT: Stimulating Critical Thinking and Writing of Students through Engagement with Large Language Models (LLMs) (2023.12.15)

Multi-modal Latent Space Learning for Chain-of-Thought Reasoning in Language Models (2023.12.14)

Control Risk for Potential Misuse of Artificial Intelligence in Science (2023.12.11)

Chain-of-Thought in Neural Code Generation: From and For Lightweight Language Models (2023.12.09)

Latent Skill Discovery for Chain-of-Thought Reasoning (2023.12.07)

Computation of the optimal error exponent function for fixed-length lossy source coding in discrete memoryless sources (2023.12.06)

WonderJourney: Going from Anywhere to Everywhere (2023.12.06)

👉Complete paper list 🔗 for "Chain of Thought"👈

In-context Learning

G-LLaVA: Solving Geometric Problem with Multi-Modal Large Language Model (2023.12.18)

A mathematical perspective on Transformers (2023.12.17)

Control Risk for Potential Misuse of Artificial Intelligence in Science (2023.12.11)

WonderJourney: Going from Anywhere to Everywhere (2023.12.06)

Minimizing Factual Inconsistency and Hallucination in Large Language Models (2023.11.23)

Igniting Language Intelligence: The Hitchhiker's Guide From Chain-of-Thought Reasoning to Language Agents (2023.11.20)

An Embodied Generalist Agent in 3D World (2023.11.18)

MedAgents: Large Language Models as Collaborators for Zero-shot Medical Reasoning (2023.11.16)

Towards Verifiable Text Generation with Symbolic References (2023.11.15)

Learning skillful medium-range global weather forecasting (2023.11.14)

👉Complete paper list 🔗 for "In-context Learning"👈

Retrieval Augmented Generation

ARES: An Automated Evaluation Framework for Retrieval-Augmented Generation Systems (2023.11.16)

Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection (2023.10.17)

Benchmarking Large Language Models in Retrieval-Augmented Generation (2023.09.04)

Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering (2023.07.31)

Referral Augmentation for Zero-Shot Information Retrieval (2023.05.24)

LLMDet: A Large Language Models Detection Tool (2023.05.24)

KNN-LM Does Not Improve Open-ended Text Generation (2023.05.24)

Knowledge-Retrieval Task-Oriented Dialog Systems with Semi-Supervision (2023.05.22)

Sentence Representations via Gaussian Embedding (2023.05.22)

Retrieving Texts based on Abstract Descriptions (2023.05.21)

👉Complete paper list 🔗 for "Retrieval Augmented Generation"👈

Evaluation & Reliability

TouchStone: Evaluating Vision-Language Models by Language Models (2023.08.31)

Shepherd: A Critic for Language Model Generation (2023.08.08)

Self-consistency for open-ended generations (2023.07.11)

Jailbroken: How Does LLM Safety Training Fail? (2023.07.05)

Towards Measuring the Representation of Subjective Global Opinions in Language Models (2023.06.28)

On the Reliability of Watermarks for Large Language Models (2023.06.07)

SETI: Systematicity Evaluation of Textual Inference (2023.05.24)

From Words to Wires: Generating Functioning Electronic Devices from Natural Language Descriptions (2023.05.24)

Testing the General Deductive Reasoning Capacity of Large Language Models Using OOD Examples (2023.05.24)

EvEval: A Comprehensive Evaluation of Event Semantics for Large Language Models (2023.05.24)

👉Complete paper list 🔗 for "Evaluation & Reliability"👈

Agent

AppAgent: Multimodal Agents as Smartphone Users (2023.12.21)

WonderJourney: Going from Anywhere to Everywhere (2023.12.06)

Agent as Cerebrum, Controller as Cerebellum: Implementing an Embodied LMM-based Agent on Drones (2023.11.25)

GPT-4V in Wonderland: Large Multimodal Models for Zero-Shot Smartphone GUI Navigation (2023.11.13)

Lemur: Harmonizing Natural Language and Code for Language Agents (2023.10.10)

Agents: An Open-source Framework for Autonomous Language Agents (2023.09.14)

An Appraisal-Based Chain-Of-Emotion Architecture for Affective Language Model Game Agents (2023.09.10)

Cognitive Architectures for Language Agents (2023.09.05)

Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization (2023.08.04)

Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics (2023.07.04)

👉Complete paper list 🔗 for "Agent"👈

Multimodal Prompt

G-LLaVA: Solving Geometric Problem with Multi-Modal Large Language Model (2023.12.18)

LLaVA-Grounding: Grounded Visual Chat with Large Multimodal Models (2023.12.05)

Sequential Modeling Enables Scalable Learning for Large Vision Models (2023.12.01)

LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (2023.11.28)

MeshGPT: Generating Triangle Meshes with Decoder-Only Transformers (2023.11.27)

An Embodied Generalist Agent in 3D World (2023.11.18)

Emu Video: Factorizing Text-to-Video Generation by Explicit Image Conditioning (2023.11.17)

MedAgents: Large Language Models as Collaborators for Zero-shot Medical Reasoning (2023.11.16)

Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding (2023.11.14)

EviPrompt: A Training-Free Evidential Prompt Generation Method for Segment Anything Model in Medical Images (2023.11.10)

👉Complete paper list 🔗 for "Multimodal Prompt"👈

Prompt Application

A mathematical perspective on Transformers (2023.12.17)

Mathematical discoveries from program search with large language models (2023.12.14)

LLM360: Towards Fully Transparent Open-Source LLMs (2023.12.11)

From Text to Motion: Grounding GPT-4 in a Humanoid Robot "Alter3" (2023.12.11)

Control Risk for Potential Misuse of Artificial Intelligence in Science (2023.12.11)

Sequential Modeling Enables Scalable Learning for Large Vision Models (2023.12.01)

MeshGPT: Generating Triangle Meshes with Decoder-Only Transformers (2023.11.27)

Minimizing Factual Inconsistency and Hallucination in Large Language Models (2023.11.23)

Igniting Language Intelligence: The Hitchhiker's Guide From Chain-of-Thought Reasoning to Language Agents (2023.11.20)

An Embodied Generalist Agent in 3D World (2023.11.18)

👉Complete paper list 🔗 for "Prompt Application"👈

Foundation Models

Time is Encoded in the Weights of Finetuned Language Models (2023.12.20)

Photorealistic Video Generation with Diffusion Models (2023.12.11)

Mamba: Linear-Time Sequence Modeling with Selective State Spaces (2023.12.01)

Minimizing Factual Inconsistency and Hallucination in Large Language Models (2023.11.23)

White-Box Transformers via Sparse Rate Reduction: Compression Is All There Is? (2023.11.22)

Learning skillful medium-range global weather forecasting (2023.11.14)

Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding (2023.11.14)

SpectralGPT: Spectral Foundation Model (2023.11.13)

Social Motion Prediction with Cognitive Hierarchies (2023.11.08)

Pre-training LLMs using human-like development data corpus (2023.11.08)

👉Complete paper list 🔗 for "Foundation Models"👈

👨‍💻 LLM Usage

Large language models (LLMs) are becoming a revolutionary technology that is shaping the development of our era. By building on LLMs, developers can create applications that were previously possible only in our imaginations. However, using LLMs often comes with certain technical barriers, and even at the introductory stage people may be intimidated by the cutting-edge technology. Do you have questions like the following?

  • How can an application be built on top of an LLM using programming?
  • How can LLMs be used and deployed in your own programs?

💡 If there were a tutorial accessible to all audiences, not just computer science professionals, it would provide detailed and comprehensive guidance for getting started and becoming productive in a short amount of time, with the ultimate goal of using LLMs flexibly and creatively to build the programs you envision. And now, just for you: the most detailed and comprehensive LangChain beginner's guide, sourced from the official LangChain website but further adjusted, accompanied by thoroughly annotated code examples that explain the code line by line for all audiences.

Click 👉here👈 to take a quick tour of getting started with LLMs.
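
Before diving into the full guide, here is a minimal sketch of what calling an LLM through LangChain can look like. This is illustrative only, not the guide's own code: it assumes the `langchain-core` and `langchain-openai` packages are installed and an `OPENAI_API_KEY` environment variable is set, and the model name and prompt are placeholders; the guide may use a different interface or LangChain version.

```python
# Minimal sketch: prompt template -> chat model -> response, via LangChain.
# Assumes `pip install langchain-core langchain-openai` and OPENAI_API_KEY set.
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# A reusable prompt template with a single input variable.
prompt = ChatPromptTemplate.from_template(
    "Explain {topic} in two sentences for a beginner."
)

# Wrap a chat model; the model name here is a placeholder, not a recommendation.
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# Compose prompt and model into a small chain (LangChain Expression Language).
chain = prompt | llm

if __name__ == "__main__":
    response = chain.invoke({"topic": "in-context learning"})
    print(response.content)
```

The guide itself walks through this kind of workflow step by step, with annotated code for each stage.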

✉️ Contact

This repo is maintained by EgoAlpha Lab. Questions and discussions are welcome via helloegoalpha@gmail.com.

We welcome discussions with friends from the academic and industrial communities and look forward to exploring the latest developments in prompt engineering and in-context learning together.

🙏 Acknowledgements

Thanks to the PhD students from EgoAlpha Lab and the other contributors who participated in this repo. We will continue to improve the project and maintain this community. We would also like to express our sincere gratitude to the authors of the relevant resources; your efforts have broadened our horizons and enabled us to perceive a more wonderful world.