neneluo's Stars
hanxuhu/SeqIns
The repository for the project "Fine-tuning Large Language Models with Sequential Instructions"; the codebase is derived from open-instruct and LAVIS
philschmid/deep-learning-pytorch-huggingface
mandarjoshi90/triviaqa
Code for the TriviaQA reading comprehension dataset
pminervini/AutoSurveyGPT
Automated literature surveys/reviews with GPT! An intelligent research assistant that leverages GPT-3.5/GPT-4 to find, analyze, and rank relevant academic papers from Google Scholar based on user-provided search queries and topics
tatsu-lab/stanford_alpaca
Code and documentation to train Stanford's Alpaca models and generate the data.
yizhongw/self-instruct
Aligning pretrained language models with instruction data generated by themselves.
rasbt/LLMs-from-scratch
Implementing a ChatGPT-like LLM in PyTorch from scratch, step by step
ShishirPatil/gorilla
Gorilla: An API store for LLMs
meta-llama/llama
Inference code for Llama models
meta-llama/llama3
The official Meta Llama 3 GitHub site
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
lucidrains/toolformer-pytorch
Implementation of Toolformer, Language Models That Can Use Tools, by MetaAI
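Toolformer teaches a language model to emit inline API calls such as `[Calculator(2+3)]` inside generated text, which are then executed and replaced by their results. A minimal sketch of that execute-and-splice step (a hypothetical mini-parser for illustration, not code from the repository):

```python
import re

def run_calculator(expr):
    # Restricted arithmetic evaluator: only digits, operators, dot, parens.
    if not re.fullmatch(r"[\d+\-*/. ()]+", expr):
        raise ValueError("unsupported expression")
    return eval(expr)  # acceptable here only because of the whitelist above

# Tool registry; Toolformer's paper also uses QA, search, and translation tools.
TOOLS = {"Calculator": run_calculator}

def execute_tool_calls(text):
    """Replace Toolformer-style calls like [Calculator(3*4)] with their results."""
    def repl(match):
        tool, arg = match.group(1), match.group(2)
        return str(TOOLS[tool](arg))
    return re.sub(r"\[(\w+)\((.*?)\)\]", repl, text)
```

In the actual method, the model is fine-tuned on text augmented with such calls so it learns when a tool invocation reduces its own prediction loss.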
meta-llama/llama-recipes
Scripts for fine-tuning Meta Llama 3 with composable FSDP & PEFT methods, covering single- and multi-node GPU setups. Supports default and custom datasets for applications such as summarization and Q&A, plus a number of candidate inference solutions (HF TGI, vLLM) for local or cloud deployment, and demo apps showcasing Meta Llama 3 for WhatsApp & Messenger.
lupantech/chameleon-llm
Code for "Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models".
ysymyth/ReAct
[ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models
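ReAct interleaves reasoning steps with tool actions: the model either acts (calling a tool and receiving an observation) or finishes with an answer. A stripped-down sketch of that loop, with the model and tools passed in as plain callables (hypothetical names, not the paper's code):

```python
def react_loop(question, model, tools, max_steps=5):
    """Minimal ReAct-style loop. `model(trajectory)` returns either
    ('act', tool_name, tool_input) or ('finish', answer); tool observations
    are appended to the trajectory so the model can condition on them."""
    trajectory = [("question", question)]
    for _ in range(max_steps):
        step = model(trajectory)
        if step[0] == "finish":
            return step[1]
        _, tool, tool_input = step
        observation = tools[tool](tool_input)
        trajectory.append(("observation", observation))
    return None  # step budget exhausted without a final answer
```

In the real system the "model" is an LLM prompted to emit Thought/Action/Observation traces; here it is stubbed to show the control flow.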
darrenburns/elia
A snappy, keyboard-centric terminal user interface for interacting with large language models. Chat with ChatGPT, Claude, Llama 3, Phi 3, Mistral, Gemma and more.
zorazrw/awesome-tool-llm
night-chen/ToolQA
ToolQA is a new dataset for evaluating LLMs' ability to answer challenging questions with external tools. It offers two difficulty levels (easy/hard) across eight real-life scenarios.
MMMU-Benchmark/MMMU
This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI"
OpenDevin/OpenDevin
🐚 OpenDevin: Code Less, Make More
stanfordnlp/dspy
DSPy: The framework for programming, not prompting, foundation models
openai/grade-school-math
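The GSM8K dataset released in this repo stores each reference solution as chain-of-thought text ending in a `#### <answer>` line. A small helper for pulling out that final numeric answer, as commonly done when scoring model outputs against the dataset (the function name is my own):

```python
import re

def extract_gsm8k_answer(solution):
    """Return the final answer after the '####' marker used in GSM8K
    reference solutions, with thousands separators stripped; None if absent."""
    match = re.search(r"####\s*([\-\d,\.]+)", solution)
    if match is None:
        return None
    return match.group(1).replace(",", "")
```

Evaluation scripts typically compare this extracted string (or its numeric value) against the answer parsed from a model's generation.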
thunlp/ToolLearningPapers
reasoning-machines/pal
PaL: Program-Aided Language Models (ICML 2023)
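PaL's core move is to have the LLM generate a Python program as its reasoning trace and delegate the actual computation to an interpreter. A sketch of the execution step, assuming the generated program assigns its result to an `answer` variable (a convention for this illustration, not the repo's exact harness):

```python
def run_pal_program(program, answer_var="answer"):
    """Execute a model-generated solution program and read off its answer
    variable. WARNING: exec-ing untrusted model output requires real
    sandboxing in practice; this sketch only strips builtins."""
    namespace = {}
    exec(program, {"__builtins__": {}}, namespace)
    return namespace[answer_var]
```

The benefit over free-form chain-of-thought is that arithmetic and counting are done exactly by the interpreter rather than approximately by the model.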
lm-sys/FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
microsoft/LoRA
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
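LoRA freezes the pretrained weight W and learns a low-rank update, computing h = Wx + (α/r)·B(Ax) with B ∈ R^(d×r), A ∈ R^(r×k), and r ≪ min(d, k). The math can be sketched with plain nested-list matrices (a conceptual illustration, not the loralib API):

```python
def matvec(M, v):
    # Multiply a nested-list matrix M by a vector v.
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def lora_forward(x, W, A, B, alpha, r):
    """h = W x + (alpha / r) * B (A x); W stays frozen, only A and B train.
    B is initialized to zero, so the adapted model starts identical to the base."""
    base = matvec(W, x)
    low_rank = matvec(B, matvec(A, x))
    scale = alpha / r
    return [b + scale * l for b, l in zip(base, low_rank)]
```

Because only A and B (2·r·(d+k) parameters per layer, versus d·k) receive gradients, fine-tuning memory drops sharply, and B·A can be merged into W after training for zero inference overhead.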
ctlllll/LLM-ToolMaker
huggingface/trl
Train transformer language models with reinforcement learning.
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
AkariAsai/self-rag
The original implementation of SELF-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection, by Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi.