1. Create a Docker image;
2. Install the required software (slides 2+3);
3. Run the TensorFlow "Hello World" script via Docker (slide 4);
4. Document the procedure in detail so that a tutorial can be written from it.
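The manual steps documented below can also be consolidated into a Dockerfile, so that the image can be rebuilt reproducibly. This is a minimal sketch based only on the commands in this document; the install prefix /opt/anaconda2 is an assumption, and the Anaconda version and environment name match the ones used below:

```dockerfile
# Sketch: build on the official TensorFlow GPU image and add Anaconda
FROM tensorflow/tensorflow:latest-gpu

RUN apt-get update && apt-get install -y wget

WORKDIR /home
# -b runs the Anaconda installer non-interactively, -p sets the prefix
RUN wget https://mirrors.tuna.tsinghua.edu.cn/anaconda/archive/Anaconda2-5.2.0-Linux-x86_64.sh \
    && bash Anaconda2-5.2.0-Linux-x86_64.sh -b -p /opt/anaconda2 \
    && rm Anaconda2-5.2.0-Linux-x86_64.sh

ENV PATH=/opt/anaconda2/bin:$PATH

# Create the conda environment and install tensorflow-gpu into it
RUN conda create -y -n tensorflow python=2.7 \
    && conda install -y -n tensorflow -c conda-forge tensorflow-gpu
```

Building with `docker build -t tf-gpu-anaconda .` would then replace the interactive pull/run/attach steps.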
nvidia-docker is a container runtime for Docker that provides CUDA and cuDNN runtime support inside containers.
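Before setting anything else up, it can be worth verifying that the NVIDIA runtime is actually available to Docker. A common smoke test (assuming an NVIDIA driver is installed on the host; the image tag is just an example) is:

```shell
# If the runtime is set up correctly, this prints the host's GPU table
docker run --runtime=nvidia --rm nvidia/cuda:10.0-base nvidia-smi
```

If this fails, the nvidia-docker installation should be fixed before continuing.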
Installation
Download the tensorflow-gpu image and start a container.
Instructions
docker pull tensorflow/tensorflow:latest-gpu
# The container is named "bash" in this example
docker run --runtime=nvidia -it --name=bash tensorflow/tensorflow:latest-gpu bash
docker start bash
docker attach bash
apt update && apt install -y wget
cd /home
wget https://mirrors.tuna.tsinghua.edu.cn/anaconda/archive/Anaconda2-5.2.0-Linux-x86_64.sh
bash Anaconda2-5.2.0-Linux-x86_64.sh
After the installation finishes, create a conda environment.
conda create -n tensorflow python=2.7
Then install tensorflow-gpu support inside this environment.
source activate tensorflow
conda install -c conda-forge tensorflow-gpu
Now the tensorflow-gpu Docker image is fully prepared.
You can start the "bash" container, activate the "tensorflow" environment, and test the following simple script in Python 2:
import tensorflow as tf
import numpy as np  # numpy, PIL and scipy are imported only to verify the packages are available
import PIL
import scipy

hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()          # TensorFlow 1.x session API
print(sess.run(hello))       # prints: Hello, TensorFlow!
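To confirm that TensorFlow inside the container actually sees the GPU (and not just the CPU), a short check can be run in the same environment. This uses the TensorFlow 1.x `device_lib` module, matching the `tf.Session` API assumed by the script above:

```python
# TensorFlow 1.x: list all devices visible to this process.
# A working GPU setup shows at least one device with device_type 'GPU'.
from tensorflow.python.client import device_lib

for device in device_lib.list_local_devices():
    print(device.name, device.device_type)
```

If no GPU device appears, check that the container was started with --runtime=nvidia.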