Pinned Repositories
DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Gaudi-tutorials
Tutorials for running models on First-gen Gaudi and Gaudi2 for training and inference. Source files for the tutorials hosted at https://developer.habana.ai/
Habana_Custom_Kernel
Provides examples for writing and building Habana custom kernels using the HabanaTools
hccl_demo
Megatron-DeepSpeed
Intel Gaudi's Megatron-DeepSpeed implementation for training large language models
Model-References
Reference models for Intel(R) Gaudi(R) AI Accelerator
Setup_and_Install
Setup and installation instructions for Habana binaries and Docker image creation
SynapseAI_Core
SynapseAI Core is a reference implementation of the SynapseAI API running on Habana Gaudi
tpc_llvm
TPC-CLANG compiler for the TPC-C programming language, which is used to program Habana Labs deep-learning accelerators
vllm-fork
A high-throughput and memory-efficient inference and serving engine for LLMs
Intel® Gaudi® AI Accelerator's Repositories
HabanaAI/SynapseAI_Core
SynapseAI Core is a reference implementation of the SynapseAI API running on Habana Gaudi
HabanaAI/tpc_llvm
TPC-CLANG compiler for the TPC-C programming language, which is used to program Habana Labs deep-learning accelerators
HabanaAI/Gaudi-solutions
Full end-to-end examples showing how to use First-gen Gaudi and Gaudi2 in common use cases
HabanaAI/deepspeed_old
HabanaAI/DL1-Workshop
HabanaAI/pyhlml
HabanaAI/pytorch-lightning
The lightweight PyTorch wrapper for high-performance AI research. Scale your models, not the boilerplate.
HabanaAI/Snapshot_For_Debug
Snapshot scripts that gather information about the model and the Habana training session for analysis and debugging