
Scaling Deep Learning on HPC Systems

Overview

In the rapidly evolving field of deep learning, distributed training frameworks play a vital role in enhancing computational efficiency. This project builds several models using three prominent frameworks - PyTorch DDP, Horovod, and DeepSpeed - tailored for use on ALCF systems. It aims to provide practical examples and insights to help machine learning practitioners select the most suitable distributed training framework for their needs within the ALCF environment.
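Conceptually, all three frameworks implement data-parallel training: each worker computes gradients on its own shard of the data, and the gradients are averaged (all-reduced) across workers before every optimizer step. The sketch below illustrates only that averaging step in plain Python with made-up gradient values; it is not code from this repository, and the real frameworks perform this with NCCL or MPI collectives across processes and nodes.

```python
# Conceptual sketch of the all-reduce (gradient averaging) step in
# data-parallel training. Hypothetical values for illustration only;
# PyTorch DDP, Horovod, and DeepSpeed do this with NCCL/MPI collectives.

def allreduce_mean(per_worker_grads):
    """Average each parameter's gradient across all workers."""
    num_workers = len(per_worker_grads)
    num_params = len(per_worker_grads[0])
    return [
        sum(worker[p] for worker in per_worker_grads) / num_workers
        for p in range(num_params)
    ]

# Two workers, each holding gradients for two parameters.
grads = [
    [1.0, -4.0],  # worker 0
    [3.0, 0.0],   # worker 1
]
print(allreduce_mean(grads))  # -> [2.0, -2.0]
```

After this step every worker holds identical averaged gradients, so applying the same optimizer update keeps the model replicas in sync.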

Explore the Methods

Each method in this repository is documented in its own detailed README file. Follow the links below to explore each method and get started:

  1. MNIST
  2. ResNet50
  3. VLTVG
  4. Cosmoflow

Support and Contribution

We welcome contributions, questions, and feedback. Please feel free to open an issue or submit a pull request.

License

This project is licensed under the BSD 2-Clause License - see the LICENSE.md file for details.