
Efficient Deep Learning Systems

This repository contains materials for the Efficient Deep Learning Systems course taught at the Faculty of Computer Science of HSE University and Yandex School of Data Analysis.

This branch corresponds to the ongoing 2024 course. For the full materials of past years, see the "Past versions" section.

Syllabus

  • Week 1: Introduction
    • Lecture: Course overview and organizational details. Core concepts of the GPU architecture and CUDA API.
    • Seminar: CUDA operations in PyTorch. Introduction to benchmarking.
  • Week 2: Experiment tracking, model and data versioning, testing DL code in Python
    • Lecture: Experiment management basics and pipeline versioning. Configuring Python applications. Intro to regular and property-based testing.
    • Seminar: Example DVC+Weights & Biases project walkthrough. Intro to testing with pytest.
  • Week 3: Training optimizations, profiling DL code
    • Lecture: Mixed-precision training. Data storage and loading optimizations. Tools for profiling deep learning workloads.
    • Seminar: Automatic Mixed Precision in PyTorch. Dynamic padding for sequence data and JPEG decoding benchmarks. Basics of profiling with py-spy, PyTorch Profiler, PyTorch TensorBoard Profiler, nvprof and Nsight Systems.
  • Week 4: Basics of distributed ML
    • Lecture: Introduction to distributed training. Process-based communication. Parameter Server architecture.
    • Seminar: Multiprocessing basics. Parallel GloVe training.
  • Week 5: Data-parallel training and All-Reduce
    • Lecture: Data-parallel training of neural networks. All-Reduce and its efficient implementations.
    • Seminar: Introduction to PyTorch Distributed. Data-parallel training primitives.
  • Week 6: Training large models
    • Lecture: Model parallelism, gradient checkpointing, offloading, sharding.
    • Seminar: Gradient checkpointing and tensor parallelism in practice.
  • Week 7: Python web application deployment
    • Lecture/Seminar: Building and deployment of production-ready web services. App & web servers, Docker, Prometheus, API via HTTP and gRPC.
  • Week 8: LLM inference optimizations and software
    • Lecture: Inference speed metrics. KV caching, batch inference, continuous batching. FlashAttention with its modifications and PagedAttention. Overview of popular LLM serving frameworks.
    • Seminar: Basics of the Triton language. Layer fusion in PyTorch and Triton. Implementing KV caching and FlashAttention in practice.
  • Week 9: Efficient model inference
    • Lecture: Hardware utilization metrics for deep learning. Knowledge distillation, quantization, LLM.int8(), SmoothQuant, GPTQ. Efficient model architectures. Speculative decoding.
    • Seminar: Measuring memory bandwidth utilization in practice. Data-free quantization, GPTQ, and SmoothQuant in PyTorch.
  • Week 10: Guest lecture
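
To give a flavor of the Week 5 material, here is a minimal, single-process simulation of the ring All-Reduce algorithm (an illustrative sketch only, not course code and not the NCCL implementation): each of `n` workers starts with one chunk per slot, a reduce-scatter phase accumulates partial sums around the ring, and an all-gather phase circulates the fully reduced chunks.

```python
def ring_all_reduce(vectors):
    """Simulate ring All-Reduce in one process.

    `vectors` is a list of n per-worker vectors, each with n chunks
    (for simplicity, one scalar chunk per worker per slot). Returns the
    per-worker results; each equals the elementwise sum over workers.
    """
    n = len(vectors)
    assert all(len(v) == n for v in vectors), "need #chunks == #workers"
    data = [list(v) for v in vectors]  # copy each worker's buffer

    # Reduce-scatter: in step s, worker w sends chunk (w - s) % n to its
    # right neighbour, which adds it to its own copy of that chunk. After
    # n - 1 steps, worker w holds the fully reduced chunk (w + 1) % n.
    for s in range(n - 1):
        for w in range(n):
            c = (w - s) % n
            data[(w + 1) % n][c] += data[w][c]

    # All-gather: in step s, worker w forwards the fully reduced chunk
    # (w + 1 - s) % n to its right neighbour, which overwrites its copy.
    for s in range(n - 1):
        for w in range(n):
            c = (w + 1 - s) % n
            data[(w + 1) % n][c] = data[w][c]

    return data
```

Each worker sends and receives only `2 * (n - 1)` chunks regardless of ring size, which is why this pattern underlies the bandwidth-optimal All-Reduce implementations discussed in the lecture.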
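
The KV caching idea from Week 8 can also be sketched in a few lines (a hypothetical toy with scalar keys, values, and queries, not the course implementation): during autoregressive decoding, each step appends one key/value pair to a cache and attends over all cached entries, instead of recomputing keys and values for the whole prefix.

```python
import math

def attention(q, keys, values):
    """Single-query softmax attention over a list of scalar keys/values."""
    scores = [q * k for k in keys]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return sum(e * v for e, v in zip(exps, values)) / z

class KVCache:
    """Toy KV cache: append this step's key/value once, then reuse all
    previous entries instead of recomputing them at every decoding step."""
    def __init__(self):
        self.keys, self.values = [], []

    def step(self, q, k, v):
        self.keys.append(k)
        self.values.append(v)
        return attention(q, self.keys, self.values)
```

The cached result at step `t` matches attention recomputed over the full prefix, but the per-step cost drops from quadratic to linear in the sequence length, at the price of the cache's memory footprint, which motivates techniques like PagedAttention covered in the lecture.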
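
Finally, the quantization topic from Week 9 reduces to a simple recipe in its most basic form. A minimal sketch of symmetric per-tensor int8 quantization (illustrative only; the GPTQ and SmoothQuant methods covered in the course are considerably more involved):

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: scale by max |w| so the
    largest weight maps to 127, then round and clip to [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # avoid scale == 0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 codes."""
    return [x * scale for x in q]
```

Rounding introduces an error of at most half a quantization step (`scale / 2`) per weight; the lecture discusses how methods like LLM.int8() and SmoothQuant handle the outliers that make this naive scheme lossy for large models.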

Grading

There will be several home assignments (spread over multiple weeks) on the following topics:

  • Training pipelines and code profiling
  • Distributed and memory-efficient training
  • Deploying and optimizing models for production

The final grade is a weighted sum of per-assignment grades. Please refer to the course page of your institution for details.

Staff

Past versions