Tensorflow-gpu-DockerImage

Instructions for setting up a tensorflow-gpu Docker image with Nvidia GPU support.

Tensorflow docker image

Task:

1, Create a Docker image;  
2, Include the required software (slides 2+3);  
3, Run the TensorFlow "Hello World" script via Docker (slide 4);  
4, Document the procedure in detail so that a tutorial can be created from it.  

Installation platform: Ubuntu 18.04 LTS, Nvidia driver 390.48, Docker 18.06.1-ce.

1, Install nvidia-docker

nvidia-docker is a container runtime for Docker that exposes the host's Nvidia GPU to containers, providing CUDA and cuDNN runtime support.
Install it by following the official nvidia-docker installation guide; a typical sequence for Ubuntu is sketched below.
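
The following is a rough sketch of the nvidia-docker2 installation on Ubuntu 18.04 at the time of writing; the package repository layout and image tags may have changed, so double-check against the official guide before running these commands.

# Add the nvidia-docker package repository and its signing key
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
    sudo tee /etc/apt/sources.list.d/nvidia-docker.list

# Install nvidia-docker2 and restart the docker daemon
sudo apt-get update
sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker

# Smoke test: the nvidia runtime should be able to see the host GPU
# (nvidia/cuda:9.0-base is an example tag matching driver 390.48)
docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi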

2, Configure the tensorflow-gpu image

Download the tensorflow-gpu image and start a container.
See the official TensorFlow Docker instructions for details.

2.1 Download tensorflow-gpu image

docker pull tensorflow/tensorflow:latest-gpu
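
Optionally, confirm that the image was pulled successfully:

# List the locally available tensorflow/tensorflow images
docker images tensorflow/tensorflow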

2.2 Start a container with nvidia runtime support

# The container is named "bash" in this example
docker run --runtime=nvidia -it --name=bash tensorflow/tensorflow:latest-gpu bash
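
Inside the running container you can quickly check that the GPU is visible; the Python one-liner below assumes the TensorFlow 1.x API shipped with this image.

# The host GPU should show up here
nvidia-smi

# TensorFlow 1.x: prints True if a GPU device is available
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"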

3, Install Anaconda in container

3.1 Start the container and attach to its console.

docker start bash
docker attach bash

3.2 Install Anaconda (Optional - not recommended)

apt install wget
cd /home
wget https://mirrors.tuna.tsinghua.edu.cn/anaconda/archive/Anaconda2-5.2.0-Linux-x86_64.sh
bash Anaconda2-5.2.0-Linux-x86_64.sh
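
Assuming you let the installer prepend Anaconda to the shell profile (the default prompt at the end of the installation), reload the profile so that the conda command is found:

# Reload the profile and verify that conda is on PATH
source ~/.bashrc
conda --version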

After installation, create a conda environment.

conda create -n tensorflow python=2.7

Then, install tensorflow-gpu inside this environment.

source activate tensorflow
conda install -c conda-forge tensorflow-gpu
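
To verify that the conda-installed tensorflow-gpu package can see the GPU, a quick check inside the activated environment (TensorFlow 1.x API) might look like this:

# Lists local devices; a /device:GPU:0 entry should appear
python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"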

The tensorflow-gpu Docker container is now fully prepared.
Start the "bash" container, activate the "tensorflow" environment, and test it with the following simple Python 2 script:

import tensorflow as tf
# numpy, PIL and scipy are not used below; importing them simply verifies
# that these packages are installed in the environment
import numpy as np
import PIL
import scipy

# Build a constant op and evaluate it in a session (TensorFlow 1.x API)
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()

# Should print: Hello, TensorFlow!
print(sess.run(hello))
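
Putting it all together, a typical test run could look like the following; hello_tensorflow.py is a hypothetical file name for the script above.

docker start bash
docker attach bash

# Inside the container:
source activate tensorflow
python hello_tensorflow.py   # should print: Hello, TensorFlow!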