- Ansible
Ansible is a tool that helps automate IT tasks. Such tasks may include installing, updating, and configuring software and services.
- Ansible Project
- Modules Intro
- Modules Index
- Patterns - Targeting hosts and groups
- Roles
- Using Variables
- Ansible Vault
- SSH Pipelining
- Yaml
- Jinja
- Ansible Best Practices Essentials
- Ansible Best Practices
- Mastering loops with j2 templates
Installation depends on the control node configuration. For example, on Ubuntu the preferred way to install Ansible is to use the system package manager, in this case `apt`, whereas on macOS the preferred method is to install it via the Python package manager `pip`.
Therefore, the best approach is to always consult the Installing Ansible section of the official documentation.
One of the common ways is to install Ansible using Python's package manager `pip`. I highly recommend installing it inside a virtual environment, for example using pyenv (Simple Python Version Management) together with the pyenv-virtualenv plugin.
# Download and Install Python 3.9.6
pyenv install 3.9.6
# Create and activate virtual environment
pyenv virtualenv 3.9.6 ansible-cookbook
pyenv activate ansible-cookbook
# Update pip, setuptools and install:
# - ansible
# - vagranttoansible (creates inventory from vagrant environment)
# - ansible-lint (checks for best practices)
pip install --upgrade pip setuptools
pip install -r requirements.txt
# Verify ansible installation
ansible --version
ansible [core 2.11.4]
config file = /home/mkukan/code/maroskukan/ansible-cookbook/ansible.cfg
configured module search path = ['/home/mkukan/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/mkukan/.pyenv/versions/3.9.6/envs/ansible-cookbook/lib/python3.9/site-packages/ansible
ansible collection location = /home/mkukan/.ansible/collections:/usr/share/ansible/collections
executable location = /home/mkukan/.pyenv/versions/ansible-cookbook/bin/ansible
python version = 3.9.6 (default, Aug 21 2021, 19:18:25) [GCC 9.3.0]
jinja version = 3.0.1
libyaml = True
# Finally to deactivate virtual environment
pyenv deactivate ansible-cookbook
Note: If your project directory has a `.python-version` file with the name of the virtual environment defined, the environment gets activated automatically when you enter that directory.
The behavior of an Ansible installation can be adjusted by modifying settings in the Ansible configuration file. Ansible chooses its current configuration from one of several possible locations, in the following order:
- The `ANSIBLE_CONFIG` environment variable
- The `./ansible.cfg` in the ansible command's current working directory
- The `~/.ansible.cfg` located in your home folder
- The `/etc/ansible/ansible.cfg` in the default installation folder
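For example, to point ansible at a specific configuration file for a single run (the path below is just a placeholder):
# Use an explicit configuration file for this invocation only
ANSIBLE_CONFIG=/path/to/custom.cfg ansible --version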
To verify which ansible configuration file is being used when calling ansible commands, use the `ansible --version` command.
To verify the content of the ansible configuration file that is being used, use the `ansible-config view` command. One example of such a configuration file is displayed below:
[defaults]
remote_user = devops
inventory = environments/prod
retry_files_save_path = /tmp
host_key_checking = False
log_path=~/ansible.log
To display the full configuration, including defaults, you can use the `ansible-config dump` command.
ACTION_WARNINGS(default) = True
AGNOSTIC_BECOME_PROMPT(default) = True
ALLOW_WORLD_READABLE_TMPFILES(default) = False
ANSIBLE_CONNECTION_PATH(default) = None
ANSIBLE_COW_PATH(default) = None
ANSIBLE_COW_SELECTION(default) = default
ANSIBLE_COW_WHITELIST(default) = ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon']
ANSIBLE_FORCE_COLOR(default) = False
Each ansible configuration file `ansible.cfg` contains one or more section titles enclosed in square brackets. Each section contains settings defined as key-value pairs.
Basic operations use two main sections:
- `[defaults]` sets defaults for Ansible operation, for example connection settings.
- `[privilege_escalation]` configures how Ansible performs privilege escalation on managed hosts.
[defaults]
host_key_checking = False
inventory = ./inventory
There are many other settings that can be defined in the `[defaults]` section, for example:
- `remote_user` specifies the user you want to use on the managed hosts. If unspecified, the current user name will be used.
- `remote_port` specifies which sshd port you want to use on the managed hosts. If unspecified, the default port is 22.
- `ask_pass` controls whether Ansible will prompt you for the SSH password. If unspecified, it is assumed that you are using SSH key-based authentication.
The settings that can be defined in the `[privilege_escalation]` section include, for example:
- `become` controls whether you will automatically use privilege escalation. Default is `no`.
- `become_user` controls which user on the managed host Ansible should become (default is `root`).
- `become_method` controls how Ansible will become that user (using `sudo` by default; there are other options like `su`).
- `become_ask_pass` controls whether to prompt you for a password for your become method (default is `no`).
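Putting the two sections together, a minimal `ansible.cfg` sketch could look like this (the values are illustrative, not a recommendation):
[defaults]
inventory = ./inventory
remote_user = devops
host_key_checking = False

[privilege_escalation]
become = True
become_method = sudo
become_user = root
become_ask_pass = False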
Please note that many settings can be overridden at the inventory level if required.
To view all available settings with their explanation, use the `ansible-config list` command.
To view all values (including defaults) for the current settings, use the `ansible-config dump` command. To view only the values that were changed, use `ansible-config dump --only-changed`.
As mentioned in the section before, settings can be overridden at the inventory level by setting connection variables. There are multiple ways to accomplish this, as shown below:
- Place the settings in a file in the `host_vars` directory in the same directory as your inventory file
- These settings override the ones in `ansible.cfg`
- They also have slightly different syntax and naming, for example `remote_user` (global) vs `ansible_user` (inventory)
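For illustration, a sketch of such a host-level override, assuming a host named `app01` in the inventory (the file name and values are made up):
# host_vars/app01.yml -- placed next to the inventory file
ansible_user: vagrant
ansible_port: 22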
The Control Machine is the server where Ansible is installed. In order to utilize SSH key-based authentication, you need to generate a key pair and distribute the public key to each remote node by storing it in the `authorized_keys` file.
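A minimal sketch of that key distribution, assuming a `devops` account on a hypothetical host `app01`:
# Generate a key pair on the control machine
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
# Install the public key into the remote user's authorized_keys
ssh-copy-id devops@app01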
Although the default Ansible mode of operation, which uses a push model, does not require any agent installed on the managed hosts, there are some settings that need to be in place for a host to be managed by the Ansible control node:
- SSH key-based authentication to an unprivileged account that can use `sudo` to become `root` without a password
- Ansible allows further flexibility to meet your current security policy
More details on how to set up both the Control Machine and the Remote node can be found in this Medium article.
From the documentation, Modules (also referred to as “task plugins” or “library plugins”) are discrete units of code that can be used from the command line or in a playbook task. Ansible executes each module, usually on the remote managed node, and collects return values. In Ansible 2.10 and later, most modules are hosted in collections.
To display all installed modules on the system, use the `ansible-doc -l` command. The name and the description of each module is displayed. To display information about a particular module, use `ansible-doc [module-name]`, for example:
ansible-doc copy | bat --language yml
> ANSIBLE.BUILTIN.COPY (/home/maros/.local/lib/python3.8/site-packages/ansible/modules/copy.py)
The `copy' module copies a file from the local or remote
machine to a location on the remote machine. Use the
[ansible.builtin.fetch] module to copy files from remote
locations to the local box. If you need variable interpolation
in copied files, use the [ansible.builtin.template] module.
Using a variable in the `content' field will result in
unpredictable output. For Windows targets, use the
[ansible.windows.win_copy] module instead.
* note: This module has a corresponding action plugin.
OPTIONS (= is mandatory):
- attributes
The attributes the resulting file or directory should have.
To get supported flags look at the man page for `chattr' on
the target system.
This string should contain the attributes in the same order as
the one displayed by `lsattr'.
The `=' operator is assumed as default, otherwise `+' or `-'
operators need to be included in the string.
(Aliases: attr)[Default: (null)]
type: str
version_added: 2.3
version_added_collection: ansible.builtin
- backup
Create a backup file including the timestamp information so
you can get the original file back if you somehow clobbered it
incorrectly.
[Default: False]
type: bool
version_added: 0.7
version_added_collection: ansible.builtin
[ Output omitted ]
Some common ansible modules include:
- File Modules:
  - `copy` Copy a local file to the managed host
  - `file` Set permissions and other properties of files
  - `lineinfile` Ensures a particular line is or is not in a file
  - `synchronize` Synchronizes content using rsync
- Software package modules:
  - `package` Manages packages
  - `apt` Manages packages using APT
  - `yum` Manages packages using YUM
  - `gem` Manages Ruby packages
- System Modules:
  - `firewalld` Manages arbitrary ports and services using firewalld
  - `reboot` Reboot the machine
  - `service` Manage services
  - `user` Add, remove and manage user accounts
- Net Tools Modules:
  - `get_url` Download files over HTTP, HTTPS, or FTP
  - `nmcli` Manage networking
  - `uri` Interact with web services and communicate with APIs
To list all installed modules, you can use the `ansible-doc --list` command.
There are a handful of modules that run commands directly on the managed host. You can use these if no other module is available to do what you need. They are not idempotent, so you must make sure that they are safe to run twice when using them. Examples of such modules are:
- `command` runs a single command on the system; it does not use a shell and does not have access to environment variables
- `shell` runs a command through the remote system's shell (redirection and other shell features work)
- `raw` simply runs a command with no processing (can be dangerous, but useful when managing systems that cannot have Python installed, for example legacy network equipment)
- `script` runs a local script on a remote node after transferring it
- `expect` executes a command and responds to prompts
- `telnet` executes a low-down and dirty telnet command
An example comparison between the `command` and `shell` modules can be found below.
ansible prod -m command -a "free" | grep -i swap
Swap: 7340032 0 7340032
Swap: 7340032 0 7340032
Swap: 7340032 0 7340032
Swap: 7340032 0 7340032
In the above example, the piping happens on the control node, as the `command` module does not support piping.
ansible prod -m shell -a "free | grep -i swap"
[WARNING]: Found both group and host with same name: db
[WARNING]: Found both group and host with same name: lb
app2 | CHANGED | rc=0 >>
Swap: 7340032 0 7340032
db | CHANGED | rc=0 >>
Swap: 7340032 0 7340032
app1 | CHANGED | rc=0 >>
Swap: 7340032 0 7340032
lb | CHANGED | rc=0 >>
Swap: 7340032 0 7340032
Using the `shell` module, which supports piping, you can filter the output on the target node.
As mentioned in the beginning, these modules are not idempotent by design. For example, when you invoke the `command` module with the following parameters, the initial execution will create a directory but the next one will fail.
ansible app -m command -a "mkdir /tmp/dir1"
app2 | CHANGED | rc=0 >>
app1 | CHANGED | rc=0 >>
ansible app -m command -a "mkdir /tmp/dir1"
app2 | FAILED | rc=1 >>
mkdir: cannot create directory '/tmp/dir1': File exists
non-zero return code
app1 | FAILED | rc=1 >>
mkdir: cannot create directory '/tmp/dir1': File exists
non-zero return code
To overcome this issue, you can add the `creates` option.
ansible app -m command -a "mkdir /tmp/dir1 creates=/tmp/dir1"
app1 | SUCCESS | rc=0 >>
skipped, since /tmp/dir1 exists
app2 | SUCCESS | rc=0 >>
skipped, since /tmp/dir1 exists
Ad hoc refers to the mode where ansible is used one time, often to test a module or to experiment, as it does not require any significant configuration (such as playbooks). When calling a module, you often need to define mandatory variables. In the example below the `copy` module requires that you define the source and destination paths for the file you want to copy.
ansible -m copy -a "src=master.gitconfig dest=~/.gitconfig" localhost
Dry run.
ansible -m copy -a "src=master.gitconfig dest=~/.gitconfig" --check localhost
Dry run with diff flag.
ansible -m copy -a "src=master.gitconfig dest=~/.gitconfig" --check --diff localhost
The following example demonstrates the use of the `homebrew` module.
ansible -m homebrew -a "name=bat state=latest" localhost
ansible -m homebrew -a "name=jq state=latest" localhost
Inventory files describe a collection of hosts or systems you want to manage using ansible commands. Hosts can be assigned to groups, and groups can contain other child groups. Hosts can be members of multiple groups. Variables can be set that apply to hosts and groups, for example connection parameters such as SSH username or port.
There are many different types of inventory files. They can be defined in various formats, for example INI or YAML. To see the full list, use the following `ansible-doc` command.
ansible-doc -t inventory --list
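For illustration, a small INI-style inventory sketch with groups and a child group (host names and addresses are reused from the examples in this document):
# inventory/group-vagrant
[centos]
centos20 ansible_host=192.168.137.106

[ubuntu]
ubuntu10 ansible_host=192.168.137.137

[vagrant:children]
centos
ubuntu

[vagrant:vars]
ansible_user=vagrant
ansible_port=22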
It is common to define the location of the inventory file within the `ansible.cfg` configuration file under the `[defaults]` section. The example below defines an inventory folder `inventory` located in the same directory as the ansible configuration file.
[defaults]
inventory = ./inventory
The content of this folder is as follows:
inventory
├── explicit-localhost
├── group-centos
├── group-ubuntu
├── group-vagrant
├── rhel-hosts.py
├── sles-host
├── ubuntu-centos-hosts.yml.orig
To verify that the inventory is correctly formatted and understood by ansible, you can use the `ansible-inventory` command with options such as `--list` or `--graph`. The examples below show the output of these commands.
ansible-inventory --list
{
"_meta": {
"hostvars": {
"192.168.137.106": {
"ansible_port": 22,
"ansible_user": "vagrant"
},
"192.168.137.137": {
"ansible_port": 22,
"ansible_user": "vagrant"
},
"192.168.137.162": {
"ansible_port": 22,
"ansible_user": "vagrant"
},
"192.168.137.245": {
"ansible_port": 22,
"ansible_user": "vagrant"
}
}
},
"all": {
"children": [
"ungrouped",
"vagrant"
]
},
"centos": {
"hosts": [
"192.168.137.106",
"192.168.137.162"
]
},
"ubuntu": {
"hosts": [
"192.168.137.137",
"192.168.137.245"
]
},
"vagrant": {
"children": [
"centos",
"ubuntu"
]
}
}
ansible-inventory --graph [--vars]
@all:
|--@ungrouped:
|--@vagrant:
| |--@centos:
| | |--192.168.137.106
| | |--192.168.137.162
| |--@ubuntu:
| | |--192.168.137.137
| | |--192.168.137.245
Another way to verify the inventory configuration is to use the ansible command with the `--list-hosts` parameter. This command also supports globbing with `*`. You can also specify multiple groups or hosts separated by commas. Indexing and negation are also supported. This is useful when you need to be sure that you are targeting the correct hosts.
ansible --list-hosts all
hosts (8):
localhost
sles40
rhel30
rhel31
centos20
centos21
ubuntu10
ubuntu11
ansible --list-hosts "ubuntu*"
hosts (2):
ubuntu10
ubuntu11
ansible --list-hosts vagrant,localhost
hosts (5):
centos20
centos21
ubuntu10
ubuntu11
localhost
#
# Note: In zsh you may need to escape [0] as \[0\] or use quotes ''
#
ansible --list-hosts all[0]
hosts (1):
ubuntu10
ansible --list-hosts \!ubuntu
hosts (6):
localhost
sles40
rhel30
rhel31
centos20
centos21
ansible --list-hosts '!ubuntu'
Connection parameters define the means by which Ansible interacts with a managed host. To display the available `connection` plugins, use the following command:
ansible-doc -t connection --list
local execute on controller
paramiko_ssh Run tasks via python ssh (paramiko)
psrp Run tasks over Microsoft PowerShell Remoting Protocol
ssh connect via ssh client binary
winrm Run tasks over Microsoft's WinRM
By default, the `ssh` connection plugin is used when connecting to Linux hosts. By using collections and roles it is possible to expand the default list of connection plugins.
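For illustration, a connection plugin can also be selected per host in the inventory with the `ansible_connection` variable; a common case is running tasks for localhost without SSH:
# inventory snippet
localhost ansible_connection=local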
In order to conduct a simple reachability test for hosts defined in the inventory, you can use an Ansible ad-hoc command with the `ping` module. Below I am running this module against the `ubuntu` host group.
ansible -m ping ubuntu
192.168.137.137 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
192.168.137.245 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
Another way is to leverage the `command` module and execute a command on the managed host. In the case below, git is not installed on the hosts that are part of the `centos` group.
ansible -m command -a "git config --global --list" centos
192.168.137.106 | FAILED | rc=2 >>
[Errno 2] No such file or directory: b'git': b'git'
192.168.137.162 | FAILED | rc=2 >>
[Errno 2] No such file or directory: b'git': b'git'
A playbook is a YAML-based text file which lists one or more plays in a specific order. A play is an ordered list of tasks run against specific hosts within an inventory.
Each task runs a module that performs some simple action on or for the managed host. Most tasks are idempotent and can be safely run a second time without problems.
In the example below, we are executing a single task using the `copy` module on `localhost`.
---
- name: Description of first play
  hosts: localhost
  tasks:
    - name: Description of first task
      copy: src="master.gitconfig" dest="~/.gitconfig"
The playbook below uses a different format, but results in the same end state.
---
- hosts: localhost
  tasks:
    - copy:
        src: "master.gitconfig"
        dest: "~/.gitconfig"
You can use the `-C` option to perform a dry run of the playbook execution. This causes Ansible to report what changes would have occurred if the playbook were executed, but does not make any actual changes to the managed hosts.
ansible-playbook -C playbook.yml
ansible-playbook playbooks/playbook.yml
PLAY [Ensure git installed] *****************************************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************
ok: [192.168.137.106]
ok: [192.168.137.162]
TASK [package] ******************************************************************************************************************************************
ok: [192.168.137.162]
ok: [192.168.137.106]
PLAY [Ensure ~/.gitconfig copied from master.gitconfig] *************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************
ok: [192.168.137.106]
ok: [192.168.137.162]
ok: [192.168.137.137]
ok: [192.168.137.245]
TASK [first show no config in targets] ******************************************************************************************************************
fatal: [192.168.137.137]: FAILED! => {"changed": true, "cmd": ["git", "config", "--global", "--list"], "delta": "0:00:00.002296", "end": "2021-02-02 14:57:03.018818", "msg": "non-zero return code", "rc": 128, "start": "2021-02-02 14:57:03.016522", "stderr": "fatal: unable to read config file '/home/vagrant/.gitconfig': No such file or directory", "stderr_lines": ["fatal: unable to read config file '/home/vagrant/.gitconfig': No such file or directory"], "stdout": "", "stdout_lines": []}
[Output omitted]
Variables increase code reusability by decoupling dynamic values that are unique to a given project. This simplifies the creation and maintenance of code and reduces the number of errors.
Variables can contain items like:
- Unique Users to create, modify or delete
- Unique Software to install and uninstall
- Unique Services to start, stop and restart
- Unique Credentials to manage
Variables must start with a letter, and they can only contain letters, numbers and underscores. Examples of valid variable names include:
web_server
remote_file
file1
file_1
remote_server1
remote_server_1
There are three available scopes (or reaches) where a variable can exist:
- Global
- The value is set for all hosts
- Example: extra variables you set in the job template
- Host
- The value is set for a particular host (or group)
- Examples: variables set for a host in the inventory or host_vars directory, gathered facts
- Play
- The value is set for all hosts in the context of the current play.
- Examples: vars directives in a play, include_vars tasks and so on
There are a few rules that define the order of operations for variables:
- If a variable is defined at more than one level, the level with the highest precedence wins.
- A narrow scope generally takes precedence over a wider scope.
- Variables that you define in an inventory are overridden by variables that you define in the playbook.
- Variables defined in a playbook are overridden by "extra variables" defined on the command line with the `-e` option.
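For illustration, a sketch of overriding a play variable from the command line (the playbook name and value are placeholders; `user_name` is reused from the example further below):
ansible-playbook playbook.yml -e "user_name=jane"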
Variables can be defined in multiple ways. One common method is to place a variable in a `vars` block at the beginning of a play:
- hosts: all
  vars:
    user_name: joe
    user_state: present
It is also possible to define play variables in external files. Use `vars_files` at the start of the play to load variables from a list of files into the play:
- hosts: all
  vars_files:
    - vars/users.yml
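A sketch of what the referenced `vars/users.yml` might contain (the values are assumptions, chosen to match the play example that follows):
# vars/users.yml
user_name: joe
user_state: present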
After declaring variables, you can use them in tasks. Reference a variable by placing the variable name in double braces: `{{ variable_name }}`. Ansible substitutes the variable with its value when it runs the task.
When you reference one variable as another variable's value, and the curly braces start the value, you must use quotes around the value. For example `name: "{{ user_name }}"`.
- name: Example play
  hosts: all
  vars:
    user_name: joe
  tasks:
    # This line will read: Creates the user joe
    - name: Creates the user {{ user_name }}
      user:
        # This line will create the user named joe
        name: "{{ user_name }}"
        state: present
Host variables apply to a specific host, whereas group variables apply to all hosts in a host group or in a group of host groups.
Host variables take precedence over group variables, but variables defined inside a play take precedence over both.
Host variables and group variables can be defined:
- In the inventory itself
- In `host_vars` and `group_vars` directories in the same directory as the inventory
- In `host_vars` and `group_vars` directories in the same directory as the playbook. These are host and group based but have higher precedence than inventory variables.
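For illustration, a sketch of a group-level variable file placed next to the inventory (the values mirror the `vagrant` group used elsewhere in this document):
# inventory/group_vars/vagrant.yml
ansible_user: vagrant
ansible_port: 22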
There are cases where you need to store sensitive data such as passwords, API keys and other secrets. These secrets are passed to Ansible through variables.
Ansible Vault provides a way to encrypt and decrypt files used by playbooks. The `ansible-vault` command is used to manage these files.
The syntax of this command is `ansible-vault [ create | view | edit ] <filename>`.
If the file already exists, you can encrypt it with `ansible-vault encrypt <filename>`. Optionally you can save the encrypted file under a new name using the `--output=new_filename` option.
To decrypt a file use `ansible-vault decrypt <filename>`.
When running a playbook that uses a file encrypted with vault, you need to provide the vault password, for example using the `--ask-vault-pass` or `--vault-id` option:
ansible-playbook --ask-vault-pass <playbook>
The `@prompt` label used with `--vault-id` will prompt the user for the Ansible Vault password.
In some cases you need to use multiple passwords for different files. In such cases you need to set labels during file encryption, for example:
# Encrypt files using labels
ansible-vault encrypt <gvars_filename> --vault-id gvars@prompt
ansible-vault encrypt <lvars_filename> --vault-id lvars@prompt
# Specify the labels during playbook invocation
ansible-playbook --vault-id gvars@prompt --vault-id lvars@prompt playbook.yml
If you need to change the password of an encrypted file, you can use the `ansible-vault rekey <filename>` command.
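For illustration, a sketch of how an encrypted variable file might be consumed in a play (the file path and variable name are assumptions):
- hosts: all
  vars_files:
    - vars/secrets.yml        # encrypted with ansible-vault
  tasks:
    - name: Show that the decrypted variable is available (demo only)
      ansible.builtin.debug:
        var: db_password
Run it with `ansible-playbook --ask-vault-pass playbook.yml` so the file can be decrypted at runtime.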
An Ansible role is a folder that contains tasks, files, templates, handlers, variables and playbooks to achieve a desired state.
For example, a base role could include shared system packages and configuration which can be applied to all targets. A service-specific role (web, app, db) can be applied to only selected ones.
Using variables and encapsulation greatly increases reusability and scalability.
To create a new role skeleton, you can leverage `ansible-galaxy`.
ansible-galaxy init control
- Role control was created successfully
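A minimal sketch of applying the newly created role in a play (the target host group is an assumption, matching the `control` play used later in this document):
- hosts: control
  become: yes
  roles:
    - control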
Ansible Galaxy provides a platform for distributing high-level constructs such as roles and collections that can be reused among ansible users.
ansible-galaxy role info geerlingguy.docker | bat -l yml
ansible-galaxy role install geerlingguy.docker
ansible-galaxy collection install -r requirements.yml
ansible-galaxy collection list
# /home/maros/.ansible/collections/ansible_collections
Collection Version
----------------- -------
community.docker 1.2.1
community.general 2.0.0
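For illustration, a `requirements.yml` that would pull in collections like the ones listed above might look like this (the version pin is an assumption):
---
collections:
  - name: community.docker
    version: ">=1.2.1"
  - name: community.general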
From the documentation, `ansible-console` is a REPL that allows for running ad-hoc tasks against a chosen inventory (based on dominis' ansible-shell).
ansible-console [inventory] --module-path=~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules:~/.ansible/collections/ansible_collections/community/general/plugins/modules
ansible_container_test2 | SUCCESS => {
"changed": false,
"ping": "pong"
}
ansible_container_test3 | SUCCESS => {
"changed": false,
"ping": "pong"
}
ansible_container_test1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
maros@containers (3)[f:5]$ git_config list_all=yes scope=global
ansible_container_test3 | SUCCESS => {
"changed": false,
"config_values": {
"user.email": "maros.kukan@gmail.com",
"user.name": "Maros"
},
"msg": ""
}
ansible_container_test1 | SUCCESS => {
"changed": false,
"config_values": {
"user.email": "maros.kukan@gmail.com",
"user.name": "Maros"
},
"msg": ""
}
ansible_container_test2 | SUCCESS => {
"changed": false,
"config_values": {
"user.email": "maros.kukan@gmail.com",
"user.name": "Maros"
},
"msg": ""
}
You can verify the changes by running bash on a sample container.
docker container exec -it ansible_container_test1 bash
root@19e8d86a26b1:/# git config --global --list
user.email=maros.kukan@gmail.com
user.name=Maros
A decentralized mode of operation, where self-managed nodes have a scheduled job to pull playbooks from a central VCS and execute them using the local ansible installation.
Full documentation on this feature can be found here.
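A minimal sketch of such a pull using the `ansible-pull` command (the repository URL is a placeholder, and `local.yml` is just an example playbook name):
# Typically scheduled via cron on the managed node itself
ansible-pull -U <repository-url> local.yml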
Ansible playbook execution can be optimized in a number of ways. In order to get a baseline, measure your current execution time.
Note: you can locate these playbooks in examples/class-mastering-ansible
time ansible-playbook site.yml
...
[Output omitted for brevity]
...
15.24s user 3.77s system 37% cpu 50.348 total
time ansible-playbook stack_status.yml
...
[Output omitted for brevity]
...
6.83s user 1.72s system 53% cpu 16.042 total
One of the ways to decrease execution time is to disable facts gathering when it is not used.
gather_facts: no
Depending on the module that is being used, an optimization step can be introduced at this level. For example, instead of updating the apt cache for each role or play, you can do it at the beginning and set a cache timeout like in the example below.
---
- hosts: all
  become: yes
  gather_facts: no
  tasks:
    - name: update apt cache
      ansible.builtin.apt: update_cache=yes cache_valid_time=86400

- include: control.yml
- include: database.yml
- include: webserver.yml
- include: loadbalancer.yml
If you need to target only a particular host or group instead of the ones defined in the playbook, you can use the `--limit` or `-l` argument.
ansible-playbook site.yml -l app01
Tags can be used to selectively run a particular task or set of tasks.
Start by defining a tag for a particular task inside the playbook.
---
- name: install tools
  ansible.builtin.apt: name="{{ item }}" state=present
  with_items:
    - curl
  tags: ['packages']
To list the available tags in playbook(s), use the `--list-tags` argument.
ansible-playbook site.yml --list-tags
playbook: site.yml
play #1 (all): all TAGS: []
TASK TAGS: [packages]
play #2 (control): control TAGS: []
TASK TAGS: [packages]
play #3 (database): database TAGS: []
TASK TAGS: [configure, packages, service]
play #4 (webserver): webserver TAGS: []
TASK TAGS: [configure, packages, service, system]
play #5 (loadbalancer): loadbalancer TAGS: []
TASK TAGS: [configure, packages, service]
To run only the tagged task(s):
ansible-playbook site.yml --tags "packages"
To run all tasks except the ones with a given tag:
time ansible-playbook site.yml --skip-tags "packages"
...
[Output omitted for brevity]
...
11.33s user 2.77s system 49% cpu 28.544 total
Pipelining reduces the number of operations that SSH needs to perform during connection setup. By default it is disabled, but it can be enabled in `ansible.cfg`. There is a system prerequisite though: `requiretty` must be disabled in the sudoers configuration on the managed hosts.
...
[ssh_connection]
pipelining = True
When you initially write a playbook, you likely start by installing packages and ensuring that the service is started.
However, as you add more service configuration, it is worth reconsidering the placement of the initial tasks, for example moving the service start close to the end of the playbook so that changes to configuration files are picked up.
When you troubleshoot a specific task, it makes sense to focus just on that particular section. You could comment out the rest of the playbook, or take advantage of the `--list-tasks` and `--start-at-task` arguments.
ansible-playbook site.yml --list-tasks
playbook: site.yml
play #1 (all): all TAGS: []
tasks:
update apt cache TAGS: [packages]
play #2 (control): control TAGS: []
tasks:
control : install tools TAGS: [packages]
play #3 (database): database TAGS: []
tasks:
mysql : install tools TAGS: [packages]
mysql : install mysql-server TAGS: [packages]
mysql : ensure mysql listening on eth0 port TAGS: [configure]
mysql : ensure mysql started TAGS: [service]
mysql : create database TAGS: [configure]
mysql : create demo user TAGS: [configure]
play #4 (webserver): webserver TAGS: []
tasks:
apache2 : install web components TAGS: [packages]
apache2 : ensure mod_wsgi enabled TAGS: [configure]
apache2 : de-activate default apache site TAGS: [configure]
apache2 : ensure apache2 started TAGS: [service]
demo_app : install web components TAGS: [packages]
demo_app : copy demo app source TAGS: [configure]
demo_app : copy demo.wsgi TAGS: [configure]
demo_app : copy apache virtual host config TAGS: [configure]
demo_app : setup python virtualenv TAGS: [system]
demo_app : activate demo apache site TAGS: [configure]
play #5 (loadbalancer): loadbalancer TAGS: []
tasks:
nginx : install nginx TAGS: [packages]
nginx : configure nginx sites TAGS: [configure]
nginx : get active sites TAGS: [configure]
nginx : de-activate sites TAGS: [configure]
nginx : activate sites TAGS: [configure]
nginx : ensure nginx started TAGS: [service]
ansible-playbook site.yml --start-at-task "copy demo app source"
You can also use the `--step` argument to go over each task of the play, answering whether you want to run it or not.
ansible-playbook site.yml --step
PLAY [all] *********************************************************************
Perform task: TASK: update apt cache (N)o/(y)es/(c)ontinue: Y
When a host is unreachable during playbook execution, it is possible to retry the play. Ansible will create a `*.retry` file that contains the affected hosts.
ansible-playbook site.yml --limit @/home/ansible.site.retry
Static syntax analysis is available using the `--syntax-check` argument.
ansible-playbook --syntax-check site.yml
ansible-playbook --check site.yml
The debug module can be used to inspect data, such as variable values, at a transient state during playbook execution.
- ansible.builtin.debug: var=active.stdout_lines
- ansible.builtin.debug: var=vars
Then execute the playbook as usual.
ansible-playbook site.yml --limit lb01 --start-at-task "get active sites"
[Output omitted for brevity]
...
TASK [nginx : ansible.builtin.debug] ************************************************************************
ok: [lb01] => {
"active.stdout_lines": [
"myapp"
]
}
...
Add these to your shell rc file, e.g. `~/.zshrc`, or if you use the oh-my-zsh framework, edit the `~/.oh-my-zsh/custom/aliases.zsh` file.
# Ansible aliases
alias ap='ansible-playbook'
alias acl="ansible-config list"
alias ail="ansible-inventory --list"
alias aig="ansible-inventory --graph"
Once the new aliases are added, simply source the modified file with `source ~/.zshrc` and you are ready to go.
Facts are useful when you need to gather information about a particular target, which can be reused in later steps of the playbook.
Gathering Facts about localhost
ansible -m setup localhost
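A small sketch of reusing gathered facts inside a play (the fact names are standard, the message text is made up):
- hosts: all
  gather_facts: yes
  tasks:
    - name: Show the target's distribution
      ansible.builtin.debug:
        msg: "Running on {{ ansible_facts['distribution'] }} {{ ansible_facts['distribution_version'] }}"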
Pretty printed module documentation
ansible-doc copy | bat --language yml
If you are using Vagrant with machines that have their IP address assigned dynamically through DHCP, you may want to generate the inventory file from `vagrant ssh-config`. A good tool to leverage is the Vagrant-to-ansible-inventory project.
I recommend creating a Python virtual environment and installing the required package using pip before running the tool.
If private keys are not explicitly defined within the hosts file, they need to be loaded into the SSH agent before Ansible can connect to the machines provisioned by Vagrant.
for IdentityFile in $(vagrant ssh-config | grep IdentityFile | cut -d" " -f4)
do
ssh-add ${IdentityFile}
done
Any application deployment can be broken down into four pillars or stages.
- Software Packages - Code required to run the software. It can come from software package repositories (apt, yum, pip) as well as version control systems (git)
- Service Handlers - Such as scripts, init.d, systemd, they may be already included with software package
- System Configuration - Such as user permissions, firewall rules and any state that is required
- Software Configuration - Such as application configuration and content files.
When it comes to organizing files inside a project, you have a number of options. Some of them are described in the Ansible Documentation.
Below you can find a sample structure that separates environments, vars, roles, playbooks and configuration.
├── ansible.cfg
├── site.yml
├── lb.yml
├── app.yml
├── db.yml
├── systems.yml
├── update.yml
├── environments
│ ├── dev
│ └── prod
├── group_vars
│ ├── all
│ │ ├── vars
│ │ └── vault
│ ├── dev
│ └── prod
└── roles
├── myapp
├── apache2
└── mysql
├── defaults
├── files
├── handlers
├── meta
├── tasks
├── templates
├── tests
└── vars