WHU-ZQH's Stars
Significant-Gravitas/AutoGPT
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
nomic-ai/gpt4all
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
xtekky/gpt4free
The official gpt4free repository | a varied collection of powerful language models
meta-llama/llama
Inference code for Llama models
microsoft/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
hiyouga/LLaMA-Factory
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
tloen/alpaca-lora
Instruct-tune LLaMA on consumer hardware
ymcui/Chinese-LLaMA-Alpaca
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
Hannibal046/Awesome-LLM
Awesome-LLM: a curated list of Large Language Model resources
BlinkDL/RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
togethercomputer/OpenChatKit
openlm-research/open_llama
OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
OpenGVLab/LLaMA-Adapter
[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
togethercomputer/RedPajama-Data
The RedPajama-Data repository contains code for preparing large datasets for training large language models.
Instruction-Tuning-with-GPT-4/GPT-4-LLM
Instruction Tuning with GPT-4
yizhongw/self-instruct
Aligning pretrained language models with instruction data generated by themselves.
young-geng/EasyLM
Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, fine-tuning, evaluating, and serving LLMs in JAX/Flax.
FreedomIntelligence/Medical_NLP
Medical NLP competitions, datasets, large models, and papers
IST-DASLab/gptq
Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers".
AetherCortex/Llama-X
Open Academic Research on Improving LLaMA to SOTA LLM
gururise/AlpacaDataCleaned
Alpaca dataset from Stanford, cleaned and curated
juncongmoo/chatllama
ChatLLaMA 📢 Open-source implementation of a LLaMA-based ChatGPT, runnable on a single GPU, with a training process 15x faster than ChatGPT's
AI-in-Health/MedLLMsPracticalGuide
A curated list of practical guide resources of Medical LLMs (Medical LLMs Tree, Tables, and Papers)
bigscience-workshop/bigscience
Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data.
GaryYufei/AlignLLMHumanSurvey
Aligning Large Language Models with Human: A Survey
bigscience-workshop/xmtf
Crosslingual Generalization through Multitask Finetuning
zphang/minimal-llama
taoyds/test-suite-sql-eval
Semantic Evaluation for Text-to-SQL with Distilled Test Suites
shreyansh26/Speculative-Sampling
Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind
WHU-ZQH/Speculative-Sampling
Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind
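The two entries above implement the same DeepMind algorithm: a small draft model proposes several tokens cheaply, and the large target model accepts or rejects each one so that the output distribution matches the target model exactly. A minimal toy sketch of that accept/reject loop (using hypothetical stand-in distributions over a 4-token vocabulary, not either repo's actual code) looks like:

```python
import random

random.seed(0)


def draft_probs(prefix):
    # Hypothetical stand-in for a small, fast draft model.
    return [0.4, 0.3, 0.2, 0.1]


def target_probs(prefix):
    # Hypothetical stand-in for the large target model.
    return [0.25, 0.25, 0.25, 0.25]


def sample(probs):
    # Draw one token index from a categorical distribution.
    r, acc = random.random(), 0.0
    for tok, p in enumerate(probs):
        acc += p
        if r < acc:
            return tok
    return len(probs) - 1


def speculative_step(prefix, k=4):
    """One round of speculative sampling: draft k tokens with the
    cheap model, then accept/reject each against the target model."""
    # 1) Draft k tokens autoregressively with the small model.
    draft, ctx = [], list(prefix)
    for _ in range(k):
        tok = sample(draft_probs(ctx))
        draft.append(tok)
        ctx.append(tok)
    # 2) Accept each draft token with probability min(1, p(x)/q(x)).
    out = list(prefix)
    for tok in draft:
        p, q = target_probs(out), draft_probs(out)
        if random.random() < min(1.0, p[tok] / q[tok]):
            out.append(tok)  # accepted: keep the draft token
            continue
        # Rejected: resample from the residual max(0, p - q), renormalized.
        residual = [max(0.0, pi - qi) for pi, qi in zip(p, q)]
        z = sum(residual)
        out.append(sample([r / z for r in residual]) if z > 0 else sample(p))
        return out
    # 3) All k drafts accepted: draw one extra token from the target for free.
    out.append(sample(target_probs(out)))
    return out


print(speculative_step([0], k=3))
```

Each call appends between 1 and k+1 tokens to the prefix; the speedup comes from the target model scoring all k draft positions in one batched forward pass rather than k sequential ones.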