Pinned Repositories
AMSP
Awesome-LLM-Training-System
ChenQiaoling00.github.io
AcadHomepage: A Modern and Responsive Academic Personal Homepage
DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
DeepSpeedExamples
Example models using DeepSpeed
Distributed-Mamba
Mamba SSM architecture
flash-linear-attention
Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton
InternEvo
Seq1F1B
Sequence-level 1F1B schedule for LLMs.
The-Art-of-Linear-Algebra
Graphic notes on Gilbert Strang's "Linear Algebra for Everyone"
ChenQiaoling00's Repositories
ChenQiaoling00/AMSP
ChenQiaoling00/Awesome-LLM-Training-System
ChenQiaoling00/ChenQiaoling00.github.io
ChenQiaoling00/DeepSpeed
ChenQiaoling00/DeepSpeedExamples
ChenQiaoling00/Distributed-Mamba
ChenQiaoling00/flash-linear-attention
ChenQiaoling00/InternEvo
ChenQiaoling00/Seq1F1B
ChenQiaoling00/The-Art-of-Linear-Algebra
ChenQiaoling00/TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including 8-bit floating point (FP8) precision on Hopper and Ada GPUs, providing better performance with lower memory utilization in both training and inference.