TongLi3701's Stars
geekan/MetaGPT
🌟 The Multi-Agent Framework: First AI Software Company, Towards Natural Language Programming
linexjlin/GPTs
Leaked prompts of GPTs
microsoft/JARVIS
JARVIS, a system to connect LLMs with the ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
yoheinakajima/babyagi
ShishirPatil/gorilla
Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls)
ludwig-ai/ludwig
Low-code framework for building custom LLMs, neural networks, and other AI models
01-ai/Yi
A series of large language models trained from scratch by developers @01-ai
tyxsspa/AnyText
Official implementation code of the paper <AnyText: Multilingual Visual Text Generation And Editing>
hyunwoongko/transformer
Transformer: PyTorch Implementation of "Attention Is All You Need"
Paitesanshi/LLM-Agent-Survey
luban-agi/Awesome-Domain-LLM
A curated collection of open-source models, datasets, and evaluation benchmarks for vertical domains.
ysymyth/ReAct
[ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models
deepseek-ai/DreamCraft3D
[ICLR 2024] Official implementation of DreamCraft3D: Hierarchical 3D Generation with Bootstrapped Diffusion Prior
deepseek-ai/DeepSeek-LLM
DeepSeek LLM: Let there be answers
zwq2018/Data-Copilot
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
intelligent-machine-learning/dlrover
DLRover: An Automatic Distributed Deep Learning System
SkyworkAI/Skywork
Skywork series models are pre-trained on 3.2 TB of high-quality multilingual (mainly Chinese and English) and code data. The model weights, training data, evaluation data, and evaluation methods have all been open-sourced.
vivo-ai-lab/BlueLM
BlueLM(蓝心大模型): Open large language models developed by vivo AI Lab
IEIT-Yuan/Yuan-2.0
Yuan 2.0 Large Language Model
neelsjain/NEFTune
Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning
liyucheng09/Selective_Context
Compress your input to ChatGPT or other LLMs so they can process 2x more content while saving 40% of memory and GPU time.
XueyangFeng/LLM-Agent-Paper-Digest
Papers related to LLM agents published at top conferences
WangRongsheng/Aurora
The official code for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning"
swj0419/detect-pretrain-code
This repository provides an original implementation of Detecting Pretraining Data from Large Language Models by *Weijia Shi, *Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, Luke Zettlemoyer.
ArtificialZeng/Baichuan2-Explained
A line-by-line walkthrough of the Baichuan2 code, suitable for beginners
zhongwanjun/MemoryBank-SiliconFriend
Source code and demo for MemoryBank and SiliconFriend
AGI-Edgerunners/LLM-Continual-Learning-Papers
Must-read Papers on Large Language Model (LLM) Continual Learning
ZhihaoAIRobotic/MetaAgent
:robot: The next generation of Multi-Modal Multi-Agent platform. :space_invader: :unicorn: :crystal_ball:
liyucheng09/LatestEval
Latest Evaluation Toolkit (LatestEval): assessing language models with the latest, uncontaminated materials.
fancyerii/LLM-Continual-Learning-Papers
Must-read Papers on Large Language Model (LLM) Continual Learning