tonyw's Stars
Significant-Gravitas/AutoGPT
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools so that you can focus on what matters.
langchain-ai/langchain
🦜🔗 Build context-aware reasoning applications
geekan/MetaGPT
🌟 The Multi-Agent Framework: First AI Software Company, Towards Natural Language Programming
oobabooga/text-generation-webui
A Gradio web UI for Large Language Models.
lm-sys/FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
suno-ai/bark
🔊 Text-Prompted Generative Audio Model
microsoft/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
google-research/tuning_playbook
A playbook for systematically maximizing the performance of deep learning models.
svc-develop-team/so-vits-svc
SoftVC VITS Singing Voice Conversion
ymcui/Chinese-LLaMA-Alpaca
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
GaiZhenbiao/ChuanhuChatGPT
GUI for the ChatGPT API and many LLMs. Supports agents, file-based QA, GPT fine-tuning, and queries with web search, all in a neat UI.
Dao-AILab/flash-attention
Fast and memory-efficient exact attention
nlpxucan/WizardLM
LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath
modelscope/facechain
FaceChain is a deep-learning toolchain for generating your digital twin.
SJTU-IPADS/PowerInfer
High-speed Large Language Model Serving on PCs with Consumer-grade GPUs
EleutherAI/lm-evaluation-harness
A framework for few-shot evaluation of language models.
Plachtaa/VITS-fast-fine-tuning
A VITS fine-tuning pipeline for fast speaker-adaptation TTS and many-to-many voice conversion
togethercomputer/RedPajama-Data
The RedPajama-Data repository contains code for preparing large datasets for training large language models.
ali-vilab/AnyDoor
Official implementation of the paper "AnyDoor: Zero-shot Object-level Image Customization"
mosaicml/llm-foundry
LLM training code for Databricks foundation models
PhoebusSi/Alpaca-CoT
We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) for easy use. We welcome open-source enthusiasts to initiate any meaningful PR on this repo and integrate as many LLM-related technologies as possible. We built a fine-tuning platform that makes it easy for researchers to get started with large models, and we welcome open-source enthusiasts to submit any meaningful PRs!
TigerResearch/TigerBot
TigerBot: A multi-language multi-task LLM
NVIDIA/TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
hkust-nlp/ceval
Official GitHub repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023]
IEIT-Yuan/Yuan-2.0
Yuan 2.0 Large Language Model
alibaba/Pai-Megatron-Patch
The official repo of Pai-Megatron-Patch, developed by Alibaba Cloud for large-scale LLM & VLM training.
zsbai/wechat-versions
Archives historical versions of WeChat
cognitivecomputations/laserRMT
This is our own implementation of 'Layer Selective Rank Reduction'
genggui001/Megatron-DeepSpeed-Llama
LydiaXiaohongLi/Megatron-DeepSpeed
Ongoing research training transformer language models at scale, including: BERT & GPT-2