Pinned Repositories
Code-record
Some simple code notes from my undergraduate studies.
Machine-Learning
Megatron-LM
Ongoing research training transformer models at scale
nccl
Optimized primitives for collective multi-GPU communication
trans
translorahub
useful_code
TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
lorahub
[COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition
Weifan1226's Repositories
Weifan1226/trans
Weifan1226/translorahub
Weifan1226/useful_code