HuiYSir's Stars
zjunlp/WKM
[NeurIPS 2024] Agent Planning with World Knowledge Model
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
zhentingqi/rStar
ysymyth/ReAct
[ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models
1989Ryan/llm-mcts
[NeurIPS 2023] We use large language models as a commonsense world model and heuristic policy within Monte Carlo Tree Search, enabling better-reasoned decision-making for daily task planning problems.
microsoft/Everything-of-Thoughts-XoT
An implementation of Everything of Thoughts (XoT).
meta-llama/llama3
The official Meta Llama 3 GitHub site
LlamaFamily/Llama-Chinese
Llama Chinese community. Online Llama3 demo and fine-tuned models are available, with the latest Llama3 learning resources compiled in real time. All code has been updated to support Llama3. Building the best Chinese Llama LLM, fully open source and commercially usable.
maitrix-org/llm-reasoners
A library for advanced large language model reasoning
mlfoundations/open_flamingo
An open-source framework for training large multimodal models.
alfworld/alfworld
ALFWorld: Aligning Text and Embodied Environments for Interactive Learning
xavierpuigf/virtualhome
API to run VirtualHome, a Multi-Agent Household Simulator
szxiangjn/world-model-for-language-model
karthikv792/LLMs-Planning
An extensible benchmark for evaluating large language models on planning tasks
anuragajay/hip
Codebase for HiP
askforalfred/alfred
ALFRED - A Benchmark for Interpreting Grounded Instructions for Everyday Tasks
WooooDyy/LLM-Agent-Paper-List
The paper list of the 86-page paper "The Rise and Potential of Large Language Model Based Agents: A Survey" by Zhiheng Xi et al.
jackyzengl/GRID
AGI-Edgerunners/LLM-Planning-Papers
Must-read Papers on Large Language Model (LLM) Planning.
vmicheli/lm-butlers
allenai/embodied-clip
Official codebase for EmbCLIP
OpenGVLab/Instruct2Act
Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model
RUCAIBox/LLMSurvey
The official GitHub page for the survey paper "A Survey of Large Language Models".
xlang-ai/instructor-embedding
[ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings
Instruction-Tuning-with-GPT-4/GPT-4-LLM
Instruction Tuning with GPT-4
Lightning-AI/lit-llama
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
facebookresearch/Detic
Code release for "Detecting Twenty-thousand Classes using Image-level Supervision".
microsoft/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Gary3410/TaPA
[arXiv 2023] Embodied Task Planning with Large Language Models
tatsu-lab/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.