A Docker image for a GPU-enabled Keras and PyTorch Jupyter notebook. The image ships with CUDA 10.0.
NVIDIA drivers, Docker and NVIDIA Docker are assumed to be properly installed.
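As a quick sanity check of the host setup (a suggested step, not part of the image itself), you can confirm that the NVIDIA driver is visible on the host before pulling the image:
$ nvidia-smi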
Once all dependencies are in place, the Docker image can simply be "installed" by pulling it:
$ docker pull wudaown/gpu-jupyter-keras-pytorch:latest
Note that this is also the command for upgrading.
Alternatively, one can directly run
$ nvidia-docker run -it --rm wudaown/gpu-jupyter-keras-pytorch:latest nvidia-smi
A docker pull will be triggered automatically by this command. If the image runs successfully on your machine, you will see a summary table of the NVIDIA GPU status.
Launch the container with Jupyter running in the background:
$ nvidia-docker run -it -d -p 8888:8888 -v /path/to/persistent/dir:/root/workspace wudaown/gpu-jupyter-keras-pytorch
where -p 8888:8888 maps a host port to a container port in the format -p hostPort:containerPort, and -v /path/to/persistent/dir:/root/workspace mounts a host directory into the container so that your notebooks persist across runs.
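To open the notebook, browse to http://localhost:8888 on the host. Assuming the image starts the Jupyter server with token authentication (the usual Jupyter default), the access token can be read from the container logs; here <container-id> is a placeholder for the ID shown by docker ps:
$ docker ps
$ docker logs <container-id>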
By default, Keras uses TensorFlow as its backend. If you prefer Theano as the backend, set the KERAS_BACKEND environment variable when starting the container:
$ nvidia-docker run -it --rm -e KERAS_BACKEND='theano' wudaown/gpu-jupyter-keras-pytorch bash
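To confirm which backend is active and that the GPU is visible, one option (assuming keras and torch are both importable in the image's default Python environment) is to run from the bash prompt inside the container:
$ python -c "from keras import backend as K; print(K.backend())"
$ python -c "import torch; print(torch.cuda.is_available())"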