Pinned Repositories
ColossalAI
Making large AI models cheaper, faster and more accessible
deepLearningSystem2022
Homework for Deep Learning Systems 2022.
flash-attention
Fast and memory-efficient exact attention
InternEvo
InternLM
InternLM has open-sourced a 7 billion parameter base model, a chat model tailored for practical scenarios, and the training system.
leetcode-master
《代码随想录》LeetCode problem-solving guide: a recommended order for 200 classic problems, 600,000+ words of detailed illustrated explanations, video breakdowns of tricky points, 50+ mind maps, and solutions in multiple languages including C++, Java, Python, Go, and JavaScript. No more aimless algorithm study! 🔥🔥 Take a look; you'll wish you'd found it sooner! 🚀
MIT6.S081-2020-labs
Clean official source code for the MIT 6.S081 labs, mirrored from the official MIT repository (git clone git://g.csail.mit.edu/xv6-labs-2020). Since the 2020 MIT 6.S081 lab source repository was not published on GitHub, it is mirrored here for convenient forking and for my own use.
MLCnotebooks
A forked repository of the MLC notebooks.
XYT-DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
XYT-Megatron-DeepSpeed
Ongoing research training transformer language models at scale, including: BERT & GPT-2
yingtongxiong's Repositories
yingtongxiong/deepLearningSystem2022
Homework for Deep Learning Systems 2022.
yingtongxiong/ColossalAI
Making large AI models cheaper, faster and more accessible
yingtongxiong/flash-attention
Fast and memory-efficient exact attention
yingtongxiong/InternEvo
yingtongxiong/InternLM
InternLM has open-sourced a 7 billion parameter base model, a chat model tailored for practical scenarios, and the training system.
yingtongxiong/leetcode-master
《代码随想录》LeetCode problem-solving guide: a recommended order for 200 classic problems, 600,000+ words of detailed illustrated explanations, video breakdowns of tricky points, 50+ mind maps, and solutions in multiple languages including C++, Java, Python, Go, and JavaScript. No more aimless algorithm study! 🔥🔥 Take a look; you'll wish you'd found it sooner! 🚀
yingtongxiong/MIT6.S081-2020-labs
Clean official source code for the MIT 6.S081 labs, mirrored from the official MIT repository (git clone git://g.csail.mit.edu/xv6-labs-2020). Since the 2020 MIT 6.S081 lab source repository was not published on GitHub, it is mirrored here for convenient forking and for my own use.
yingtongxiong/MLCnotebooks
A forked repository of the MLC notebooks.
yingtongxiong/pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
yingtongxiong/udacity-cs344
Google Colab Notebooks for Udacity CS344 - Intro to Parallel Programming
yingtongxiong/XYT-DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
yingtongxiong/XYT-Megatron-DeepSpeed
Ongoing research training transformer language models at scale, including: BERT & GPT-2