Ansible playbooks supporting the deployment of YDB clusters onto VMs or bare-metal servers.
Currently, the playbooks support the following scenarios:
- the initial deployment of YDB static (storage) nodes;
- YDB database creation;
- the initial deployment of YDB dynamic (database) nodes;
- adding extra YDB dynamic nodes to the YDB cluster;
- updating the cluster configuration file and TLS certificates, with an automatic rolling restart.
The following scenarios are yet to be implemented (TODO):
- configuring extra storage devices within the existing YDB static nodes;
- adding extra YDB static nodes to the existing cluster;
- removing YDB dynamic nodes from the existing cluster.
Current limitations:
- configuration file customization depends on support for automatic actor system thread management, which requires YDB version 23.1.26.hotfix1 or later;
- the cluster configuration file has to be created manually;
- there are no examples for configuring storage nodes with different disk layouts (this should be achievable by defining different `ydb_disks` values for different host groups).
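As a sketch of the host-group workaround mentioned above, different `ydb_disks` values could be placed in separate group variable files. The group names, device paths and labels below are illustrative assumptions, not files shipped with the playbooks:

```yaml
# group_vars/ydbd_static_small (hypothetical group): hosts with one data disk
ydb_disks:
  - name: /dev/vdb
    label: ydb_disk_1
---
# group_vars/ydbd_static_large (hypothetical group): hosts with two data disks
ydb_disks:
  - name: /dev/vdb
    label: ydb_disk_1
  - name: /dev/vdc
    label: ydb_disk_2
```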
Playbooks were specifically tested on the following Linux flavours:
- Ubuntu 22.04 LTS
- AlmaLinux 8
- AlmaLinux 9
- AstraLinux Special Edition 1.7
- REDOS 7.3
Default configuration settings are defined in the `group_vars/all` file as a set of Ansible variables. An example file is provided. Different playbook executions may require different variable values, which can be accomplished by specifying extra JSON-format files and passing them on the command line.
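For example, a run-specific override file (contents illustrative) can be passed to the playbook invocation using Ansible's standard `-e @file.json` syntax:

```json
{
  "ydb_dbname": "testdb",
  "ydb_default_groups": 8
}
```

Saved as, say, `vars-testdb.json`, the file would be passed via `-e @vars-testdb.json` on the command line, overriding the defaults from `group_vars/all` for that run.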
The meaning and format of the variables used are specified in the table below.
| Variable | Meaning |
| --- | --- |
| `ansible_python_interpreter` | The correct path to the Python interpreter on the YDB cluster hosts. |
| `ydb_dir` | Path of the YDB software installation directory to be created on the hosts. |
| `libidn_archive` | Enables the installation of a custom-built libidn for RHEL, AlmaLinux or Rocky Linux. |
| `ydb_archive` | YDB server binary package in `.tar.gz` format. |
| `ydb_unpack_options` | Extra flags passed to `tar` for unpacking the YDB server binaries; typically should contain the value `--strip-component=1`. |
| `ydb_tls_dir` | Path to the local directory with the TLS certificates and keys, as generated by the sample script, or following the filename convention used by the sample script. |
| `ydb_config` | Name of the cluster configuration file within the `files` subdirectory (without the `actor_system_config` snippet!). |
| `ydb_domain` | Name of the root domain hosting the databases; the value `Root` is used in the YDB documentation. |
| `ydb_disks` | Disk layout of the storage nodes, defined as `ydbd_static` in the `hosts` file. A list of structures with the fields `name` (physical device name, like `/dev/sdb` or `/dev/vdb`) and `label` (the desired YDB data partition label, as used in the cluster configuration file, like `ydb_disk_1`). |
| `ydb_dynnodes` | Set of dynamic nodes to be run on each host listed as `ydbd_dynamic` in the `hosts` file. A list of structures with the fields `dbname` (name of the YDB database handled by the corresponding dynamic node), `instance` (dynamic node service instance name, to distinguish between multiple dynamic nodes for the same database running on the same host) and `offset` (integer 0-N used as the offset for the standard network port numbers; 0 means using the standard ports). |
| `ydb_brokers` | List of host names running the YDB static nodes; exactly 3 (three) host names must be specified. |
| `ydb_cores_static` | Number of cores to be used by the thread pools of the static nodes. |
| `ydb_cores_dynamic` | Number of cores to be used by the thread pools of the dynamic nodes. |
| `ydb_dbname` | Database name, used for database creation, dynamic node deployment and dynamic node rolling restart. |
| `ydb_pool_kind` | YDB default storage pool kind, as specified in the static nodes' configuration file in the `storage_pool_types.kind` field. |
| `ydb_default_groups` | Initial number of storage groups in the newly created database. |
| `dynnode_restart_sleep_seconds` | Number of seconds to sleep after the startup of each dynamic node during a rolling restart. |
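As an illustration of the structured variables above, a `group_vars/all` fragment might look like the following. Host names, database names and instance names are examples only, not defaults used by the playbooks:

```yaml
# Two dynamic nodes for the same database on each ydbd_dynamic host;
# the second instance uses offset 1 to shift its network ports.
ydb_dynnodes:
  - dbname: testdb
    instance: a
    offset: 0
  - dbname: testdb
    instance: b
    offset: 1

# Exactly three static node host names must be listed as brokers.
ydb_brokers:
  - static-node-1.example.com
  - static-node-2.example.com
  - static-node-3.example.com

ydb_cores_static: 8
ydb_cores_dynamic: 4
```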
Overall installation is performed according to the official instructions, with several steps automated with Ansible. The steps below are adapted for the Ansible-based process:

1. Review the system requirements and prepare the YDB hosts. Ensure that SSH access and sudo-based root privileges are available.
2. Prepare the TLS certificates; the provided sample script may be used to automate this step.
3. Download the YDB server distribution. It is better to use the latest binary version available.
4. Clone the Github repository containing the YDB Ansible playbooks:
   ```bash
   git clone https://github.com/ydb-platform/ydb-ansible
   cd ydb-ansible
   ```
5. Prepare the list of hosts to deploy the YDB static and dynamic nodes, as sections `[ydbd_static]` and `[ydbd_dynamic]` in the `hosts` file. An example file is provided.
6. Prepare the cluster configuration file according to the instructions in the documentation, and save it to the `files` subdirectory. Omit the `actor_system_config` section, as it will be added automatically.
7. Copy the `group_vars/all.example` file to `group_vars/all`, and customize it according to your environment.
8. Copy the `files/secret.example` file to `files/secret`, and customize the desired initial administrative password, leaving the username `root` unchanged. Ansible Vault can be configured to protect this sensitive file (TODO: document the actions).
9. Deploy the static nodes and initialize the cluster by running the `run-install-static.sh` script. Ensure that the playbook has completed successfully; diagnose and fix execution errors if they happen.
10. Create at least one database according to the documentation. Multiple databases may run on a single cluster, each requiring YDB dynamic node services to handle the requests. To create the database using the Ansible playbook, use the `run-create-database.sh` script. Use the `ydb_dbname` and `ydb_default_groups` variables to configure the desired database name and the initial number of storage groups in the new database.
11. Deploy the dynamic nodes by running the `run-install-dynamic.sh` script. Ensure that the playbook has completed successfully; diagnose and fix execution errors if they happen.
12. Repeat steps 10-11 as necessary to create more databases, or step 11 to deploy more YDB dynamic nodes.
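An example `hosts` inventory for the host list preparation step might look like this (host names are illustrative placeholders, not the provided example file):

```ini
[ydbd_static]
static-node-1.example.com
static-node-2.example.com
static-node-3.example.com

[ydbd_dynamic]
dynamic-node-1.example.com
dynamic-node-2.example.com
```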
To update the YDB cluster configuration files (`ydbd-config.yaml`, TLS certificates and keys) using the Ansible playbook, the following actions are necessary:

1. Ensure that the `hosts` file contains the current list of YDB cluster nodes, both static and dynamic.
2. Ensure that the configuration variable `ydb_config` in the `group_vars/all` file points to the desired YDB server configuration file.
3. Ensure that the configuration variable `ydb_tls_dir` points to the directory containing the desired TLS key and certificate files for all the nodes of the YDB cluster.
4. Apply the updated configuration to the cluster by running the `run-update-config.sh` script. Ensure that the playbook has completed successfully; diagnose and fix execution errors if they happen.
Notes:
- Please take into account that the rolling restart is performed node by node; for a large cluster the process may take a significant amount of time.
- For Certificate Authority (CA) certificate rotation, at least two separate configuration updates are needed:
  - first, to deploy the `ca.crt` file containing both the new and the old CA certificates;
  - second, to deploy the fresh server keys and certificates signed by the new CA certificate.
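The first step of the CA rotation can be sketched as a simple concatenation of the two CA certificates. The file names and contents below are illustrative placeholders, not files produced by the playbooks or the sample script:

```shell
# Placeholder "certificates" standing in for the real PEM files (illustrative only)
printf -- '-----BEGIN CERTIFICATE-----\nNEW-CA\n-----END CERTIFICATE-----\n' > ca-new.crt
printf -- '-----BEGIN CERTIFICATE-----\nOLD-CA\n-----END CERTIFICATE-----\n' > ca-old.crt

# Build the combined bundle with the new CA first, then the old one,
# so nodes trusting either CA keep working during the rotation.
cat ca-new.crt ca-old.crt > ca.crt
```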
Installation actions, referenced in the playbook step lists below, include:
- libaio or libaio1 is installed, depending on the operating system;
- chrony is installed and enabled to ensure time synchronization;
- jq is installed to support some scripting logic used in the playbooks;
- the YDB user group and user are created;
- the YDB installation directory is created;
- the YDB server software binary package is unpacked into the YDB installation directory;
- YDB client package automatic update checks are disabled for the YDB user, to avoid extra messages from client commands;
- YDB TLS certificates and keys are copied to each server;
- the YDB cluster configuration file is copied to each server;
- transparent huge pages (THP) are enabled on each server, implemented by creating, activating and starting the corresponding systemd service.
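The THP activation via systemd can be sketched as a oneshot unit along the following lines. The unit name and exact settings are illustrative assumptions, not the actual unit installed by the playbooks:

```ini
# /etc/systemd/system/ydb-thp.service (hypothetical name)
[Unit]
Description=Enable transparent huge pages for YDB

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/sh -c 'echo always > /sys/kernel/mm/transparent_hugepage/enabled'
ExecStart=/bin/sh -c 'echo always > /sys/kernel/mm/transparent_hugepage/defrag'

[Install]
WantedBy=multi-user.target
```

`Type=oneshot` with `RemainAfterExit=yes` lets the unit run its commands once at boot while still reporting as active afterwards.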
The static node deployment playbook (`run-install-static.sh`) performs the following steps:
- Installation actions are executed.
- Each configured disk is checked for existing YDB data. If none is found, the disk is completely re-partitioned and its previous contents are obliterated; disks with existing YDB data are left unchanged. WARNING: the safety checks do not work for YDB disks using non-default encryption keys, so DATA LOSS IS POSSIBLE if encryption is actually used. An enhancement is probably needed to allow specifying the encryption key in the deployment options.
- `ydbd-storage.service` is created and configured as a systemd service.
- `ydbd-storage.service` is started, and the playbook waits for the static nodes to come up.
- The YDB blobstorage configuration is applied with the `ydbd admin blobstorage init` command.
- The playbook waits for the completion of the YDB storage initialization.
- The initial password for the `root` user is configured according to the contents of the `files/secret` file.
The dynamic node deployment playbook (`run-install-dynamic.sh`) performs the following steps:
- Installation actions are executed.
- For each configured database, the set of YDB dynnode systemd services is created and configured.
- The YDB dynnode services are started.
The configuration update playbook (`run-update-config.sh`) performs the following steps:
- YDB TLS certificates and keys are copied to each server.
- The YDB cluster configuration file is copied to each server.
- A rolling restart is performed for the YDB storage nodes, node by node, checking that the YDB storage cluster becomes healthy after the restart of each node.
- A rolling restart is performed for the YDB database nodes, server by server, restarting all the nodes residing on a single server at a time, and waiting for the specified number of seconds after restarting each server's nodes.