A collection of LLM resources for building products you can "own" or for doing reproducible research. Note that some of the weights and training data carry Terms of Service that should be reviewed before commercialization.
Currently collecting information on Autonomous Agents and Edge LLMs; new sections will be added as the field evolves.
- babyagi - Python script example of an AI-powered task management system. Uses the OpenAI and Pinecone APIs to create, prioritize, and execute tasks; a minimal sketch of this loop appears after the list. (2023-04-06, Yohei Nakajima)
- Auto-GPT - An experimental open-source attempt to make GPT-4 fully autonomous. (2023-04-06, Toran Bruce Richards)
- JARVIS - A system to connect LLMs with the ML community. (2023-04-06, Microsoft)
- Generative Agents: Interactive Simulacra of Human Behavior (2023-04-07, Stanford and Google)
- Twitter List: Homebrew AGI Club (2023-04-06, @altryne)
- LangChain: Custom Agents (2023-04-03, LangChain)
- HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face (2023-04-02, Microsoft)
- Introducing "🤖 Task-driven Autonomous Agent" (2023-03-29, @yoheinakajima)
- A simple Python implementation of the ReAct pattern for LLMs; a generic sketch of the pattern appears after the list. (2023-03-17, Simon Willison)
- ReAct: Synergizing Reasoning and Acting in Language Models (2023-03-10, Princeton & Google)
- Emergent autonomous scientific research capabilities of large language models (2023-04-11, Daniil A. Boiko, Robert MacKnight, and Gabe Gomes, Carnegie Mellon University)
- TurboPilot - A Copilot clone that runs a 6B-parameter code-completion LLM on a CPU with 4 GB of RAM. (2023-04-11, James Ravenscroft)
- LMFlow - An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. (2023-04-06, OptimalScale)
- xturing - Build and control your own LLMs (2023-04-03, stochastic.ai)
- GPTQ-for-LLaMA - 4-bit quantization of LLaMA using GPTQ; a toy quantization example appears after the list. (2023-04-01, qwopqwop200, Meta ToS)
- GPT4All - LLM based on LLaMA, trained on ~800k GPT-3.5-Turbo generations. (2023-03-28, Nomic AI, OpenAI ToS)
- Dolly - Large language model trained on the Databricks Machine Learning Platform (2023-03-24, Databricks Labs, Apache License)
- bloomz.cpp - Inference of HuggingFace's BLOOM-like models in pure C/C++. (2023-03-16, Nouamane Tazi, MIT License)
- alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM (2023-03-16, Kevin Kwok, MIT License)
- Stanford Alpaca - Code and documentation to train Stanford's Alpaca models and generate the data. (2023-03-13, Stanford CRFM, Apache License, Non-Commercial Data, Meta/OpenAI ToS)
- llama.cpp - Port of Facebook's LLaMA model in C/C++. (2023-03-10, Georgi Gerganov, MIT License)
- ChatRWKV - Like ChatGPT but powered by the RWKV (100% RNN) language model; open source. (2023-01-09, PENG Bo, Apache License)
- RWKV-LM - RNN with Transformer-level LLM performance. Combines the best of RNNs and transformers: fast inference, low VRAM use, and fast training. (2022?, PENG Bo, Apache License)
- Cerebras-GPT - Family of seven open models, 111M to 13B parameters (2023-03-28, Hugging Face, Cerebras, Apache License)
- Alpaca Data Cleaned (2023-03-21, Gene Ruebsamen, Apache & OpenAI ToS)
- Alpaca Dataset (2023-03-13, Hugging Face, Tatsu-Lab, Meta ToS/OpenAI ToS)
- Alpaca Model Search (Hugging Face, Meta ToS/OpenAI ToS)
- Summary of Current Models (2023-04-11, Dr Alan D. Thompson, Google Sheet)
- Running GPT4All On a Mac Using Python langchain in a Jupyter Notebook; a short LangChain sketch appears after the list. (2023-04-04, Tony Hirst, Blog Post)
- Cerebras-GPT vs LLaMA AI Model Comparison (2023-03-29, LunaSec, Blog Post)
- Cerebras-GPT: Family of Open, Compute-efficient, LLMs (2023-03-28, Cerebras, Blog Post)
- Hello Dolly: Democratizing the magic of ChatGPT with open models (2023-03-24, Databricks, Blog Post)
- The Coming of Local LLMs (2023-03-23, Nick Arner, Blog Post)
- The RWKV language model: An RNN with the advantages of a transformer (2023-03-23, Johan Sokrates Wind, Blog Post)
- Bringing Whisper and LLaMA to the masses (2023-03-15, The Changelog & Georgi Gerganov, Podcast Episode)
- Alpaca: A Strong, Replicable Instruction-Following Model (2023-03-13, Stanford CRFM, Project Homepage)
- Large language models are having their Stable Diffusion moment (2023-03-10, Simon Willison, Blog Post)
- Running LLaMA 7B and 13B on a 64GB M2 MacBook Pro with llama.cpp (2023-03-10, Simon Willison, Blog/Today I Learned)
- Introducing LLaMA: A foundational, 65-billion-parameter large language model (2023-02-24, Meta AI, Meta ToS)
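
Several of the agent entries above (babyagi, the "Task-driven Autonomous Agent" thread, Auto-GPT) revolve around the same loop: an LLM executes the current task, proposes follow-up tasks, and re-prioritizes the queue. The sketch below is a minimal, generic illustration of that loop, not the babyagi code; `call_llm` is a hypothetical stub for a completion API, and the vector-store memory (Pinecone in babyagi) is omitted.

```python
from collections import deque

def call_llm(prompt: str) -> str:
    """Stub for a completion call (e.g. the OpenAI API); replace with a real client."""
    return ""  # the empty stub ends the loop after the first task

def agent_loop(objective: str, first_task: str, max_steps: int = 5):
    tasks = deque([first_task])   # pending task queue
    completed = []                # (task, result) history
    for _ in range(max_steps):
        if not tasks:
            break
        task = tasks.popleft()
        # 1. Execute the current task in the context of the objective.
        result = call_llm(f"Objective: {objective}\nTask: {task}\nComplete the task.")
        completed.append((task, result))
        # 2. Ask the model to propose follow-up tasks based on the result.
        proposed = call_llm(
            f"Objective: {objective}\nLast result: {result}\n"
            "List any new tasks, one per line."
        )
        tasks.extend(t.strip() for t in proposed.splitlines() if t.strip())
        # 3. Ask the model to re-prioritize whatever remains in the queue.
        if tasks:
            reordered = call_llm(
                f"Objective: {objective}\nReorder these tasks by priority, one per line:\n"
                + "\n".join(tasks)
            )
            tasks = deque(t.strip() for t in reordered.splitlines() if t.strip())
    return completed
```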
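
The ReAct entries describe a prompting loop in which the model interleaves Thought / Action / Observation steps and the harness runs each requested action with a tool. The sketch below is a generic illustration of that loop under those assumptions, not Simon Willison's implementation; the `call_llm` stub and the single `lookup` tool are hypothetical.

```python
import re

def call_llm(prompt: str) -> str:
    """Stub for a completion call; replace with a real API client."""
    return "Thought: I already know this.\nAnswer: 42"

# One toy tool; real harnesses register search, calculators, code runners, etc.
TOOLS = {"lookup": lambda query: f"(stub result for {query!r})"}

def react(question: str, max_turns: int = 5) -> str:
    transcript = (
        "Answer the question by interleaving Thought, Action and Observation lines.\n"
        "Available action: lookup[query]. Finish with 'Answer: ...'.\n"
        f"Question: {question}\n"
    )
    for _ in range(max_turns):
        reply = call_llm(transcript)
        transcript += reply + "\n"
        done = re.search(r"Answer:\s*(.*)", reply)
        if done:
            return done.group(1)
        action = re.search(r"Action:\s*(\w+)\[(.*?)\]", reply)
        if action and action.group(1) in TOOLS:
            # Run the requested tool and feed the observation back to the model.
            observation = TOOLS[action.group(1)](action.group(2))
            transcript += f"Observation: {observation}\n"
    return "(no answer within the turn limit)"

print(react("What is 6 * 7?"))  # with the stub above, prints "42"
```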
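
Several edge-inference entries (GPTQ-for-LLaMA, llama.cpp, alpaca.cpp, bloomz.cpp) depend on low-bit weight quantization to fit models into a few GB of RAM. The toy round trip below only illustrates plain round-to-nearest 4-bit quantization with a per-block scale, which is where the memory savings and the approximation error come from; it is not the GPTQ algorithm itself, which additionally corrects for quantization error as it processes each layer.

```python
import numpy as np

def quantize_q4(block: np.ndarray):
    """Round-to-nearest symmetric 4-bit quantization of one block of weights."""
    scale = max(float(np.abs(block).max()) / 7.0, 1e-8)  # map the block into roughly [-7, 7]
    q = np.clip(np.round(block / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_q4(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
block = rng.normal(size=32).astype(np.float32)  # small blocks, each carrying its own scale
q, scale = quantize_q4(block)
restored = dequantize_q4(q, scale)
print("storage: 0.5 bytes per weight (plus one scale per block) vs 4.0 for float32")
print("max absolute error:", float(np.abs(block - restored).max()))
```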
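
The "Running GPT4All On a Mac" post pairs a local GPT4All model with LangChain. The fragment below sketches what that combination looked like in LangChain releases of that period; the `GPT4All` class under `langchain.llms` existed then, but the constructor arguments, backend packages, and model filename shown are assumptions and may differ between versions.

```python
# A rough sketch of the GPT4All-plus-LangChain setup; assumes LangChain and a
# local GPT4All backend are installed and the weights file has been downloaded.
from langchain import LLMChain, PromptTemplate
from langchain.llms import GPT4All

llm = GPT4All(model="./models/gpt4all-lora-quantized.bin")  # hypothetical local path

prompt = PromptTemplate(
    input_variables=["question"],
    template="Question: {question}\nAnswer:",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(question="What is an autonomous agent?"))
```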