It is an Ansible role to:

- Install Docker (editions, channels and version pinning are all supported)
- Install Docker Compose v1 and Docker Compose v2 (version pinning is supported)
- Install the `docker` PIP package so Ansible's `docker_*` modules work
- Manage Docker registry login credentials
- Configure 1 or more users to run Docker without needing root access
- Configure the Docker daemon's options and environment variables
- Configure a cron job to run Docker clean up commands
If you're like me, you probably love Docker. This role provides everything you need to get going with a production ready Docker host.
By the way, if you don't know what Docker is, or are looking to become an expert with it, then check out Dive into Docker: The Complete Docker Course for Developers.
- Ubuntu 20.04 LTS (Focal Fossa)
- Ubuntu 22.04 LTS (Jammy Jellyfish)
- Debian 10 (Buster)
- Debian 11 (Bullseye)
You are viewing the master branch's documentation which might be ahead of the latest release. Switch to the latest release.
The philosophy for all of my roles is to make it easy to get going, but provide a way to customize nearly everything.
The latest Docker CE, Docker Compose v1 and Docker Compose v2 will be installed, Docker disk clean up will happen once a week and Docker container logs will be sent to `journald`.
```yaml
---

# docker.yml

- name: Example
  hosts: "all"
  become: true

  roles:
    - role: "nickjj.docker"
      tags: ["docker"]
```

Usage: `ansible-playbook docker.yml`
```sh
$ ansible-galaxy install nickjj.docker
```
Do you want to use "ce" (community edition) or "ee" (enterprise edition)?
```yaml
docker__edition: "ce"
```
Do you want to use the "stable", "edge", "testing" or "nightly" channels? You can add more than one (order matters).
```yaml
docker__channel: ["stable"]
```
- When set to "", the current latest version of Docker will be installed
- When set to a specific version, that version of Docker will be installed and pinned

```yaml
docker__version: ""

# For example, pin it to 20.10.
docker__version: "20.10"

# For example, pin it to a more precise version of 20.10.
docker__version: "20.10.17"
```
Pins are set with `*` at the end of the package version so you will end up getting minor and security patches unless you pin an exact version.
- When set to `"present"`, running this role in the future won't install newer versions (if available)
- When set to `"latest"`, running this role in the future will install newer versions (if available)

```yaml
docker__state: "present"
```
The easiest way to downgrade would be to uninstall the Docker package manually and then run this role afterwards while pinning whatever specific Docker version you want.
```sh
# An ad-hoc Ansible command to stop and remove the Docker CE package on all hosts.
ansible all -m systemd -a "name=docker-ce state=stopped" \
  -m apt -a "name=docker-ce autoremove=true purge=true state=absent" -b
```
Docker Compose v2 will get apt installed using the official `docker-compose-plugin` that Docker manages.
- When set to "", the current latest version of Docker Compose v2 will be installed
- When set to a specific version, that version of Docker Compose v2 will be installed and pinned

```yaml
docker__compose_v2_version: ""

# For example, pin it to 2.6.
docker__compose_v2_version: "2.6"

# For example, pin it to a more precise version of 2.6.
docker__compose_v2_version: "2.6.0"
```
It'll re-use the `docker__state` variable explained above in the Docker section with the same rules.
Like Docker itself, the easiest way to uninstall Docker Compose v2 is to manually run the command below and then pin a specific Docker Compose v2 version.
```sh
# An ad-hoc Ansible command to remove the Docker Compose Plugin package on all hosts.
ansible all -m apt -a "name=docker-compose-plugin autoremove=true purge=true state=absent" -b
```
Docker Compose v1 will get PIP installed inside of a Virtualenv. If you plan to use Docker Compose v2 instead, it will be very easy to skip installing v1, although technically both can be installed together since v1 is accessed with `docker-compose` and v2 is accessed with `docker compose` (notice the lack of hyphen). In any case, this is covered in detail in a later section of this README file.
- When set to "", the current latest version of Docker Compose v1 will be installed
- When set to a specific version, that version of Docker Compose v1 will be installed and pinned

```yaml
docker__compose_version: ""

# For example, pin it to 1.29.
docker__compose_version: "1.29"

# For example, pin it to a more precise version of 1.29.
docker__compose_version: "1.29.2"
```
Upgrade and downgrade strategies will be explained in the other section of this README.
A list of users to be added to the `docker` group.

Keep in mind these users need to already exist; this role will not create them. If you want to create users, check out my user role.
This role does not configure User Namespaces or any other security features by default. If the user you add here has SSH access to your server then you're effectively giving them root access to the server since they can run Docker without `sudo` and volume mount in any path on your file system.

In a controlled environment this is safe, but like anything security related it's worth knowing this up front. You can enable User Namespaces and any other options with the `docker__daemon_json` variable which is explained later.
```yaml
# Try to use the sudo user by default, but fall back to root.
docker__users: ["{{ ansible_env.SUDO_USER | d('root') }}"]

# For example, if the user you want to set is different than the sudo user.
docker__users: ["admin"]
```
Log in to 1 or more Docker registries (such as the Docker Hub).
```yaml
# Your login credentials will end up in this user's home directory.
docker__login_become_user: "{{ docker__users | first | d('root') }}"

# 0 or more registries to log into.
docker__registries:
  - #registry_url: "https://index.docker.io/v1/"
    username: "your_docker_hub_username"
    password: "your_docker_hub_password"
    #email: "your_docker_hub@emailaddress.com"
    #reauthorize: false
    #config_path: "$HOME/.docker/config.json"
    #state: "present"
docker__registries: []
```
Properties prefixed with * are required.

- `registry_url` defaults to `https://index.docker.io/v1/`
- *`username` is your Docker registry username
- *`password` is your Docker registry password
- `email` defaults to not being used (not all registries use it)
- `reauthorize` defaults to `false`, when `true` it updates your credentials
- `config_path` defaults to your `docker__login_become_user`'s `$HOME` directory
- `state` defaults to "present", when "absent" the login will be removed
Default Docker daemon options as they would appear in `/etc/docker/daemon.json`.

```yaml
docker__default_daemon_json: |
  "log-driver": "journald",
  "features": {
    "buildkit": true
  }
```
```yaml
# Add your own additional daemon options without overriding the default options.
# It follows the same format as the default options, and don't worry about
# starting it off with a comma. The template will add the comma if needed.
docker__daemon_json: ""
```
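As a sketch of that format, here's how you might enable the User Namespaces feature mentioned earlier (the specific option values below are illustrative, not part of this role's defaults):

```yaml
# Hypothetical example: extra daemon options appended after the defaults.
# Note there's no leading comma and no outer braces, matching the default
# options' format above.
docker__daemon_json: |
  "userns-remap": "default"
```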
Flags that are set when starting the Docker daemon cannot be changed in the `daemon.json` file. By default Docker sets `-H unix://` which means that option cannot be changed with the json options.
Add or change the starting Docker daemon flags by supplying them exactly how they would appear on the command line.
```yaml
# Each command line flag should be its own item in the list.
#
# Using a Docker version prior to 18.09?
# You must set `-H fd://` instead of `-H unix://`.
docker__daemon_flags:
  - "-H unix://"
```
If you don't supply some type of `-H` flag here, Docker will fail to start.
```yaml
docker__daemon_environment: []

# For example, here's how to set a couple of proxy environment variables.
docker__daemon_environment:
  - "HTTP_PROXY=http://proxy.example.com:80"
  - "HTTPS_PROXY=https://proxy.example.com:443"
```
This role lets the Docker package manage its own systemd unit file and adjusts things like the Docker daemon flags and environment variables by using the systemd override pattern.
If you know what you're doing, you can override or add to any of Docker's systemd directives by setting this variable. Anything you place in this string will be written to `/etc/systemd/system/docker.service.d/custom.conf` as is.

```yaml
docker__systemd_override: ""
```
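For instance, here's a minimal sketch of an override that raises the Docker service's task limit (the directive chosen is just an illustration of the drop-in format):

```yaml
# Hypothetical example: these lines are written to custom.conf verbatim.
docker__systemd_override: |
  [Service]
  TasksMax=infinity
```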
By default this will safely clean up disk space used by Docker every Sunday at midnight.
```yaml
# `a` removes unused images (useful in production).
# `f` forces it to happen without prompting you to agree.
docker__cron_jobs_prune_flags: "af"

# Control the schedule of the docker system prune.
docker__cron_jobs_prune_schedule: ["0", "0", "*", "*", "0"]

docker__cron_jobs:
  - name: "Docker disk clean up"
    job: "docker system prune -{{ docker__cron_jobs_prune_flags }} > /dev/null 2>&1"
    schedule: "{{ docker__cron_jobs_prune_schedule }}"
    cron_file: "docker-disk-clean-up"
    #user: "{{ (docker__users | first) | d('root') }}"
    #state: "present"
```
Properties prefixed with * are required.

- *`name` is the cron job's description
- *`job` is the command to run in the cron job
- *`schedule` is the standard cron job format for every Sunday at midnight
- *`cron_file` writes a cron file to `/etc/cron.d` instead of a user's individual crontab
- `user` defaults to the first `docker__users` user or root if that's not available
- `state` defaults to "present", when "absent" the cron file will be removed
Docker requires a few dependencies to be installed for it to work. You shouldn't have to edit any of these variables.
```yaml
# List of packages to be installed.
docker__package_dependencies:
  - "apt-transport-https"
  - "ca-certificates"
  - "cron"
  - "gnupg2"
  - "software-properties-common"

# Ansible identifies CPU architectures differently than Docker.
docker__architecture_map:
  "x86_64": "amd64"
  "aarch64": "arm64"
  "aarch": "arm64"
  "armhf": "armhf"
  "armv7l": "armhf"

# The Docker GPG key id used to sign the Docker package.
docker__apt_key_id: "9DC858229FC7DD38854AE2D88D81803C0EBFCD88"

# The Docker GPG key server address.
docker__apt_key_url: "https://download.docker.com/linux/{{ ansible_distribution | lower }}/gpg"

# The Docker upstream APT repository.
docker__apt_repository: >
  deb [arch={{ docker__architecture_map[ansible_architecture] }}]
  https://download.docker.com/linux/{{ ansible_distribution | lower }}
  {{ ansible_distribution_release }} {{ docker__channel | join (' ') }}
```
Rather than pollute your server's version of Python, all PIP packages are installed into a Virtualenv of your choosing.
```yaml
docker__pip_virtualenv: "/usr/local/lib/docker/virtualenv"
```
This role installs PIP because Docker Compose v1 is installed with the `docker-compose` PIP package and Ansible's `docker_*` modules use the `docker` PIP package.
```yaml
docker__pip_dependencies:
  - "gcc"
  - "python3-setuptools"
  - "python3-dev"
  - "python3-pip"
  - "virtualenv"

docker__default_pip_packages:
  - name: "docker"
    state: "{{ docker__pip_docker_state }}"
  - name: "docker-compose"
    version: "{{ docker__compose_version }}"
    path: "/usr/local/bin/docker-compose"
    src: "{{ docker__pip_virtualenv + '/bin/docker-compose' }}"
    state: "{{ docker__pip_docker_compose_state }}"

# Add your own PIP packages with the same properties as above.
docker__pip_packages: []
```
Properties prefixed with * are required.

- *`name` is the package name
- `version` is the package version to be installed (or "" if this is not defined)
- `path` is the destination path of the symlink
- `src` is the source path to be symlinked
- `state` defaults to "present", other values can be "forcereinstall" or "absent"
- When set to `"present"`, the package will be installed but not updated on future runs
- When set to `"forcereinstall"`, the package will always be (re)installed and updated on future runs
- When set to `"absent"`, the package will be removed

```yaml
docker__pip_docker_state: "present"
docker__pip_docker_compose_state: "present"
```
You can set `docker__pip_docker_compose_state: "absent"` in your inventory. That's it!
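For example, in a group_vars file (the group name below is made up for illustration):

```yaml
# group_vars/app.yml (hypothetical inventory group)
docker__pip_docker_compose_state: "absent"
```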
Honestly, in the future I think this will be the default behavior. Since Docker Compose v2 is still fairly new I wanted to ease into using v2. There's also no harm in having both installed together. You can pick which one to use.
This role uses `docker_login` to log in to a Docker registry, but you may also use the other `docker_*` modules in your own roles. They are not going to work unless you instruct Ansible to use this role's Virtualenv.
At either the inventory, playbook or task level you'll need to set `ansible_python_interpreter: "/usr/bin/env python3-docker"`. This works because this role creates a proxy script from the Virtualenv's Python binary to `python3-docker`.

You can look at this role's `docker_login` task as an example of how to do it at the task level.
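Here's a sketch of doing it at the task level (the `docker_image` task and its arguments are illustrative, not part of this role):

```yaml
# Hypothetical example: run a docker_* module with this role's Virtualenv.
- name: "Pull an image using the role's Python environment"
  docker_image:
    name: "nginx:latest"
    source: "pull"
  vars:
    ansible_python_interpreter: "/usr/bin/env python3-docker"
```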
MIT