zhiqiu's Stars
mlc-ai/mlc-llm
Universal LLM Deployment Engine with ML Compilation
karpathy/LLM101n
LLM101n: Let's build a Storyteller
AFDWang/Hetu-Galvatron
Galvatron is an automatic distributed training system designed for Transformer models, including Large Language Models (LLMs). If you are interested, please visit/star/fork https://github.com/PKU-DAIR/Hetu-Galvatron
xai-org/grok-1
Grok open release
NVIDIA/TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including support for 8-bit floating point (FP8) precision on Hopper and Ada GPUs, providing better performance with lower memory utilization in both training and inference.
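A minimal sketch of what FP8 usage with Transformer Engine's PyTorch API looks like, following the pattern in its documentation; the layer sizes and recipe settings here are illustrative, and a Hopper or Ada GPU is assumed:

```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common.recipe import Format, DelayedScaling

# te.Linear is a drop-in replacement for torch.nn.Linear with FP8 support.
model = te.Linear(768, 768, bias=True).cuda()

# HYBRID: E4M3 in the forward pass, E5M2 for gradients in the backward pass.
fp8_recipe = DelayedScaling(fp8_format=Format.HYBRID, amax_history_len=16)

inp = torch.randn(16, 768, device="cuda")
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = model(inp)
out.sum().backward()
```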
CompVis/stable-diffusion
A latent text-to-image diffusion model
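The CompVis repo itself ships scripts/txt2img.py; the quickest way to try the same v1.4 weights, though, is through the Hugging Face diffusers library (a different codebase, noted here as a swap-in), assuming a CUDA GPU:

```python
import torch
from diffusers import StableDiffusionPipeline

# Loads the CompVis v1.4 checkpoint from the Hugging Face Hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```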
Significant-Gravitas/AutoGPT
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools so that you can focus on what matters.
openxla/xla
A machine learning compiler for GPUs, CPUs, and ML accelerators
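The easiest way to exercise XLA directly is through JAX, which traces Python functions and hands them to XLA for compilation; a minimal sketch (JAX here is a stand-in front end, not part of the openxla/xla repo itself):

```python
import jax
import jax.numpy as jnp

@jax.jit  # trace once, compile with XLA, cache the compiled executable
def predict(w, x):
    return jnp.tanh(x @ w)

w = jnp.ones((4, 4))
x = jnp.ones((2, 4))
print(predict(w, x))  # first call compiles; later calls reuse the binary
```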
FlagAI-Open/Aquila2
The official repo of Aquila2 series proposed by BAAI, including pretrained & chat large language models.
ivy-llc/ivy
Convert Machine Learning Code Between Frameworks
microsoft/Megatron-DeepSpeed
Ongoing research training transformer language models at scale, including: BERT & GPT-2
Dao-AILab/flash-attention
Fast and memory-efficient exact attention
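A minimal sketch of the functional API; tensors are laid out as (batch, seqlen, nheads, headdim), must be fp16/bf16, and must live on a CUDA device (the sizes below are illustrative):

```python
import torch
from flash_attn import flash_attn_func

q = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)

# Exact attention, computed without materializing the full
# (seqlen x seqlen) attention matrix in GPU memory.
out = flash_attn_func(q, k, v, causal=True)  # shape (2, 1024, 8, 64)
```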
milvus-io/milvus
Milvus is a high-performance, cloud-native vector database designed to scale vector search.
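A minimal sketch with pymilvus's MilvusClient, assuming the embedded Milvus Lite backend (a local .db file) so no server is required; the collection name, dimension, and vectors are placeholders:

```python
from pymilvus import MilvusClient

client = MilvusClient("demo.db")  # Milvus Lite: local file-backed instance
client.create_collection(collection_name="docs", dimension=4)

client.insert(
    collection_name="docs",
    data=[{"id": 0, "vector": [0.1, 0.2, 0.3, 0.4]}],
)

# Nearest-neighbor search against the stored vectors.
hits = client.search(collection_name="docs", data=[[0.1, 0.2, 0.3, 0.4]], limit=1)
print(hits)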
mosaicml/composer
Supercharge Your Model Training
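A minimal sketch of a Composer training run; the model, synthetic dataset, and one-epoch duration are illustrative placeholders:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from composer import Trainer
from composer.models import ComposerClassifier

# Wrap any torch.nn.Module so Composer can train it.
net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
model = ComposerClassifier(module=net, num_classes=10)

dataset = TensorDataset(torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,)))
trainer = Trainer(
    model=model,
    train_dataloader=DataLoader(dataset, batch_size=16),
    max_duration="1ep",  # train for one epoch
)
trainer.fit()
```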
alpa-projects/alpa
Training and serving large-scale neural networks with auto parallelization.
facebookresearch/xformers
Hackable and optimized Transformers building blocks, supporting a composable construction.
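A minimal sketch of the library's memory-efficient attention operator; tensor layout is (batch, seqlen, num_heads, head_dim), fp16 on CUDA, with illustrative sizes:

```python
import torch
from xformers.ops import memory_efficient_attention

q = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)

# Dispatches to a fused kernel chosen for the hardware and shapes.
out = memory_efficient_attention(q, k, v)  # same shape as q
```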
mli/paper-reading
Paragraph-by-paragraph close readings of classic and new deep learning papers
Vision-CAIR/MiniGPT-4
Open-sourced codes for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
NVIDIA/NeMo-Framework-Launcher
Provides end-to-end model development pipelines for LLMs and multimodal models that can be launched on-premises or in cloud-native environments.
NVIDIA/Megatron-LM
Ongoing research training transformer models at scale
merrymercy/awesome-tensor-compilers
A list of awesome compiler projects and papers for tensor computation and deep learning.
triton-lang/triton
Development repository for the Triton language and compiler
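A minimal sketch of a Triton kernel, following the element-wise vector-addition pattern from Triton's own tutorials; the block size and array length are illustrative:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                      # which block am I?
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                      # guard the array tail
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.randn(4096, device="cuda")
y = torch.randn(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)               # one program per block
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
```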
PaddlePaddle/PaddleHub
Awesome pre-trained models toolkit based on PaddlePaddle. (400+ models including Image, Text, Audio, Video and Cross-Modal, with easy inference & serving) [Security hardening in progress; interaction is paused, please be patient]
facebookincubator/AITemplate
AITemplate is a Python framework that renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
Schweinepriester/github-profile-achievements
A collection listing all Achievements available on the GitHub profile 🏆
carbon-language/carbon-lang
Carbon Language's main repository: documents, design, implementation, and related tools. (NOTE: Carbon Language is experimental; see README)
ipython/ipython
Official repository for IPython itself. Other repos in the IPython organization contain things like the website, documentation builds, etc.
PaddlePaddle/Paddle
PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the core framework of PaddlePaddle 『飞桨』: high-performance single-machine and distributed training and cross-platform deployment for deep learning & machine learning)
intel/pcm
Intel® Performance Counter Monitor (Intel® PCM)
PaddlePaddle/CINN
Compiler Infrastructure for Neural Networks