Pinned Repositories
MtmEx2Tester
Bias-Mitigation-Through-Topic-Aware-Distribution-Matching
Megatron-LM
Ongoing research training transformer models at scale
NPB-CPP
The NAS Parallel Benchmarks for evaluating C++ parallel programming frameworks on shared-memory architectures
pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including support for 8-bit floating point (FP8) precision on Hopper and Ada GPUs, providing better performance and lower memory utilization in both training and inference.
HodBadichi's Repositories
HodBadichi/Bias-Mitigation-Through-Topic-Aware-Distribution-Matching
HodBadichi/Megatron-LM
HodBadichi/NPB-CPP
HodBadichi/pytorch
HodBadichi/TransformerEngine