Authors: Haohan Wang & Christian Fanli Ramsey > dyad x machina
This course is aimed at intermediate machine learning engineers, DevOps engineers, technology architects, and programmers who are interested in learning more about deep learning, especially applied deep learning with TensorFlow, Google Cloud, and Keras. We are here to give you the skills to analyze large volumes of data in a distributed way for production-level systems. After the course, you will have a solid background in how to scale out machine learning algorithms in general and deep learning in particular.
We have designed the course to provide you with the right blend of hands-on practice, theory, and best practices in this rapidly developing area, while grounding you in essential concepts that remain timeless.
Tools and frameworks like Keras, TensorFlow, and Google Cloud are used to showcase the strengths of various approaches, their trade-offs, and the building blocks for creating real-world examples of distributed deep learning models.
- Link to Packt Publishing: https://www.packtpub.com/big-data-and-business-intelligence/distributed-deep-learning-video
This course is for intermediate machine learning practitioners like you who want to learn more about deep learning and how to scale out deep learning models, and then quickly turn around and use the tools and techniques from this course to solve your own tricky deep learning tasks.
You will be successful in this course if you have basic knowledge of computer programming, especially the Python programming language. Some familiarity with deep learning concepts such as neural networks will also be helpful.
For this course, you will need a Google Cloud free tier account. Note that you won't be charged for creating the account. Instead, you get $300 of credit to spend on Google Cloud Platform over 12 months, plus access to the Always Free tier to try participating products at no charge. Working through this course should cost at most $50 of your $300 free credit.
- Keras
- TensorFlow low and high level
- Google Cloud MLE
- Keras 2.1.6
- TensorFlow 1.8
- Google Cloud MLE (latest)
```shell
sudo pip install keras
sudo pip install tensorflow-gpu
# or, for CPU-only machines:
sudo pip install tensorflow
```
Link: https://cloud.google.com/sdk/
Installation details will be explained in Section III
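After installing, a quick sanity check (a minimal sketch, assuming the pip installs above completed without errors) confirms that both libraries import and report their versions:

```python
# Quick sanity check that the Keras and TensorFlow installs worked.
import tensorflow as tf
import keras

print("TensorFlow version:", tf.__version__)
print("Keras version:", keras.__version__)
```

If either import fails, revisit the pip commands above before moving on to the cloud setup.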
Christian Fanli Ramsey
- Github: https://github.com/christianramsey
- LinkedIn : https://www.linkedin.com/in/christianramsey/
- Tumblr : https://www.tumblr.com/blog/anthrochristianramsey
- Medium : https://medium.com/@christianramsey
- Twitter : https://twitter.com/christianramsey
Haohan Wang
- Github: https://github.com/haohanwang23
- LinkedIn : https://www.linkedin.com/in/haohanw/
- Tumblr : https://www.tumblr.com/blog/haohanwang
- Medium : https://medium.com/@haohanwang
DyadxMachina
- Website: https://dyadxmachina.com
PREPARATION - Installation and Setup
- Nvidia Setup
- Anaconda Setup
- TensorFlow GPU and Google Cloud
- Requirements
SECTION I – Deep Learning with Keras
- 1.1 Keras Introduction
- 1.2 Review of backends: Theano, TensorFlow, and MXNet
- 1.3 Design and compile a model
- 1.4 Keras Model Training, Evaluation and Prediction
- 1.5 Training with augmentation
- 1.6 Training Image data on the disk with Transfer Learning and Data augmentation
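As a preview of 1.3 and 1.4, a minimal sketch of designing, compiling, training, and evaluating a Keras model might look like the following (the layer sizes and the synthetic data are illustrative assumptions, not the course's actual example):

```python
import numpy as np
from tensorflow import keras

# Design: a small fully connected classifier (illustrative sizes).
model = keras.Sequential([
    keras.layers.Input(shape=(16,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(3, activation="softmax"),
])

# Compile: choose an optimizer, a loss, and metrics to track.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train, evaluate, and predict on synthetic data.
x = np.random.rand(128, 16).astype("float32")
y = np.random.randint(0, 3, size=(128,))
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
loss, acc = model.evaluate(x, y, verbose=0)
preds = model.predict(x, verbose=0)   # one probability row per example
```

Section I walks through each of these steps (design, compile, fit, evaluate, predict) in detail.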
SECTION II – Scaling Deep Learning using Keras and TensorFlow
- 2.1 TensorFlow Introduction
- 2.2 TensorBoard Introduction
- 2.3 Types of Parallelism in Deep Learning – Synchronous vs Asynchronous
- 2.4 Distributed Deep Learning with TensorFlow
- 2.5 Configuring Keras to use TensorFlow for distributed problems
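The distributed setup in 2.4 builds on TensorFlow's notion of a cluster: a set of named jobs (such as parameter servers and workers), each with one or more tasks. A minimal sketch of describing such a cluster (the host/port addresses are placeholder assumptions; in a real deployment they point at the machines running each task):

```python
import tensorflow as tf

# Describe a cluster: one parameter server ("ps") job and a
# "worker" job with two tasks. The addresses are placeholders.
cluster = tf.train.ClusterSpec({
    "ps": ["localhost:2222"],
    "worker": ["localhost:2223", "localhost:2224"],
})

print(cluster.jobs)                 # the job names defined above
print(cluster.num_tasks("worker"))  # number of worker tasks: 2
```

Whether the workers apply gradient updates in lockstep (synchronous) or independently (asynchronous) is the parallelism trade-off covered in 2.3.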
SECTION III - Distributed Deep Learning with Google Cloud MLE
- 3.1 Representing data in TensorFlow
- 3.2 Diving into Estimators
- 3.3 Creating your Data Input Pipeline
- 3.4 Creating your Estimator
- 3.5 Packaging your model/trajectory
- 3.6 Training in the Cloud
- 3.7 Automated Hyperparameter Tuning
- 3.9 Deploying your Model to the Cloud for Prediction
- 3.10 Creating your Machine Learning API
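To give a flavor of the input pipelines in 3.3, TensorFlow data pipelines are typically built with the tf.data API. This is a minimal sketch (the in-memory features, labels, and batch size are illustrative assumptions; a real pipeline would read records from files or cloud storage):

```python
import numpy as np
import tensorflow as tf

# Illustrative in-memory data: 10 examples with one feature each.
features = np.arange(10, dtype=np.float32).reshape(10, 1)
labels = np.arange(10, dtype=np.int64)

# Build the pipeline: slice into examples, shuffle, then batch.
dataset = (tf.data.Dataset.from_tensor_slices((features, labels))
           .shuffle(buffer_size=10)   # randomize example order
           .batch(4))                 # group examples into batches

for batch_features, batch_labels in dataset:
    print(batch_features.shape, batch_labels.shape)
```

The same pipeline object can then be handed to a model or estimator for training, locally or in the cloud.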
Visit our website dyadxmachina.com
Haohan Wang: haohan723@gmail.com
Christian Fanli Ramsey: thechristianramsey@gmail.com