Optimus

A Deep Learning Cluster Scheduler
Optimus is a customized cluster scheduler for deep learning training jobs that targets high job performance and resource efficiency in production clusters. It builds resource-performance models for each job on the fly and dynamically allocates resources to jobs based on job progress and cluster load, so as to maximize training performance and resource efficiency. The implementation uses MXNet as the distributed training framework and runs jobs on Kubernetes.
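To make the modeling idea above concrete, here is a toy sketch of fitting a resource-performance model online. The model form, names, and numbers below are illustrative assumptions, not the formula from the Optimus paper: training speed with `p` parameter servers and `w` workers is assumed to be `w / (a + b*(w/p) + c*w)`, so `w / speed` is linear in the unknowns and a handful of observed samples suffice for a least-squares fit.

```python
import numpy as np

# Toy sketch (assumed model form, not the paper's): speed(p, w) =
# w / (a + b*(w/p) + c*w), where p = #parameter servers, w = #workers.
# Then w / speed = a + b*(w/p) + c*w is linear in (a, b, c), so a few
# observed (p, w, speed) samples give an ordinary least-squares fit.

def fit_speed_model(p, w, speed):
    """Fit (a, b, c) from observed allocations and measured speeds."""
    X = np.column_stack([np.ones_like(w), w / p, w])
    coef, *_ = np.linalg.lstsq(X, w / speed, rcond=None)
    return coef

def predict_speed(coef, p, w):
    """Predict training speed for a candidate (p, w) allocation."""
    a, b, c = coef
    return w / (a + b * w / p + c * w)

# Synthetic observations generated from known coefficients, standing in
# for speed measurements collected while the job runs.
p = np.array([1.0, 2.0, 2.0, 4.0, 4.0, 8.0])
w = np.array([2.0, 2.0, 4.0, 4.0, 8.0, 8.0])
speed = w / (0.5 + 0.1 * w / p + 0.02 * w)

coef = fit_speed_model(p, w, speed)
```

With a fitted model, the scheduler can compare the predicted speed of candidate allocations and shift resources toward jobs where extra workers or parameter servers help the most.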

Setup

Software Environment

(1) Ubuntu 14.04.5 LTS Server 64-bit;

(2) HDFS 2.8;

(3) Docker 17.06.0-ce;

(4) Kubernetes 1.7;

(5) NVIDIA Driver version >= 375.66;

(6) CUDA version >= 8.0.61;

(7) cuDNN library version >= 6.0.

See docs for the installation guide.

Container Environment

MXNet GPU container (if the server has NVIDIA GPUs): see images

Usage

The PS load-balancing algorithm and code are in mxnet; the scheduling code is in scheduler. Before running experimentor.py, make sure the hyper-parameters in params.py are correct.
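As a hypothetical illustration of the kind of knobs params.py typically holds, consider the sketch below; the actual variable names and values in the repository may differ.

```python
# Hypothetical params.py sketch -- names and values are illustrative,
# not the repository's actual configuration.

# cluster resources
NUM_NODES = 6              # physical servers available to the scheduler
GPUS_PER_NODE = 2          # GPUs per server
CPUS_PER_NODE = 8          # CPU cores per server

# scheduling loop
SCHEDULING_INTERVAL = 60   # seconds between resource re-allocations
MAX_NUM_WORKERS = 16       # cap on workers per job
MAX_NUM_PS = 16            # cap on parameter servers per job

# infrastructure endpoints
HDFS_NAMENODE = "hdfs://namenode:9000"   # training data and checkpoints
K8S_API_SERVER = "https://master:6443"   # Kubernetes API server
```

Verifying values like these against the actual cluster (node counts, GPU counts, endpoints) before launching experimentor.py avoids failed pod placements mid-experiment.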

Please use the images for running, or build your own by copying the scripts into your image. These scripts parse training logs and collect training speed, loss, accuracy, etc.
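A minimal sketch of this kind of log scraping is shown below. It assumes MXNet-style progress lines (as emitted by MXNet's speedometer callback); the repository's actual scripts may parse different fields and formats.

```python
import re

# Assumed MXNet-style progress line, e.g.:
#   INFO:root:Epoch[0] Batch [100]  Speed: 1234.56 samples/sec  accuracy=0.81
LINE_RE = re.compile(
    r"Epoch\[(\d+)\].*?Speed: ([\d.]+) samples/sec\s+accuracy=([\d.]+)"
)

def parse_progress(log_text):
    """Extract (epoch, speed, accuracy) records from training log text."""
    records = []
    for line in log_text.splitlines():
        m = LINE_RE.search(line)
        if m:
            epoch, speed, acc = m.groups()
            records.append({"epoch": int(epoch),
                            "speed": float(speed),
                            "accuracy": float(acc)})
    return records

# Example input: one progress line and one end-of-epoch line.
sample = (
    "INFO:root:Epoch[0] Batch [100]\tSpeed: 1234.56 samples/sec\t"
    "accuracy=0.812345\n"
    "INFO:root:Epoch[0] Train-accuracy=0.850000\n"
)
records = parse_progress(sample)
```

Speed records scraped this way are exactly the kind of per-job measurements the scheduler needs to keep its resource-performance models up to date.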

All training examples (e.g., image classification) in the paper are from the open-source community. Most are from the official MXNet examples, where you can find instructions for running them (e.g., preparing the training data and starting training). The machine translation example is from sockeye.

More

Read the Optimus paper and the morning report for details.

Contact yhpeng@cs.hku.hk if you have any questions.