jladd-mlnx's Stars
NVIDIA/Megatron-LM
Ongoing research training transformer models at scale
microsoft/tutel
Tutel MoE: An Optimized Mixture-of-Experts Implementation
openxla/stablehlo
Backward-compatible ML compute opset inspired by HLO/MHLO
openucx/ucc
Unified Collective Communication Library
NVIDIA/cloudai
CloudAI Benchmark Framework
openucx/xccl
RIKEN-RCCS/hpl-ai
An HPL-AI implementation for Fugaku
facebookresearch/torch_ucc
PyTorch process group third-party plugin for UCC