This package contains a Terraform module to create a Consul cluster in Joyent Triton Cloud.
- `module/` -- contains the Terraform Consul module
- `example/` -- contains the example Terraform configuration to launch the Consul cluster.
- `cd example/`
- `cp terraform.tfvars.example terraform.tfvars`
- Update `terraform.tfvars` to match your Triton environment:
    - `triton_url` -- Run `triton profile get` and use the value of the "url" field.
    - `triton_region` -- Run `triton profile get` and use the value of the "name" field.
    - `triton_account_name` -- Run `triton profile get` and use the value of the "account" field.
    - `triton_account_uuid` -- Run `triton account get` and use the value of the "id" field.
    - `consul_name` -- This becomes the prefix of the Triton instances of the Consul cluster. For example, if `consul_name` is "foo", then your instance names will be "foo-0", "foo-1", and so on. `examples/main.tf` also uses this to generate the proper Triton CNS name.
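Putting those values together, a filled-in `terraform.tfvars` might look like the following sketch. Every value here is a placeholder; substitute the output of your own `triton profile get` and `triton account get`:

```hcl
# Hypothetical values -- replace each with the output of the
# `triton profile get` / `triton account get` commands above.
triton_url          = "https://us-east-1.api.joyent.com"
triton_region       = "us-east-1"
triton_account_name = "myaccount"
triton_account_uuid = "01234567-89ab-cdef-0123-456789abcdef"
consul_name         = "foo"
```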
- Update the module variables in `main.tf`:
    - `instances` -- Number of Triton instances for the Consul cluster.
    - `expect` -- Consul will not start until the number of instances is equal to or greater than this value.
    - `networks` -- Array of Triton networks that the Consul instances will join. Note that Consul will refuse to start if the instance does not have a private IP. The order of the networks matters! See the `interface` description below.
    - `interface` -- The NIC interface on which Consul will advertise itself. If you have more than one network in `networks`, you need to specify the interface name accordingly. On an LX brand zone, the first network in `networks` will be assigned to `eth0`, and so on. On a joyent (SmartOS) brand zone, the first network in `networks` will be assigned to `net0`.
    - `bastion_host` -- Set this to the IP address of the bastion server if `networks` does not include a public network.
    - `domain_name` -- CNS name for this Consul cluster. This module relies on Triton CNS to discover the initial Consul server participants. This should match the CNS domain name that Triton will generate for the Consul instances.
    - `private_key` -- Private key used for public key authentication when connecting to the instances.
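As a sketch, a module block wired up with these variables might look like this. The network names, IP address, CNS domain, and key path are all placeholder assumptions, and the `source` path assumes the directory layout described above:

```hcl
module "consul" {
  source = "../module"

  instances = 3
  expect    = 3

  # Order matters: `interface` must name the NIC that maps to the
  # network carrying the private IP (the first entry here).
  networks  = ["My-Private-Network", "Joyent-SDC-Public"]
  interface = "eth0" # LX brand zone; use "net0" on a joyent (SmartOS) zone

  # Only needed when `networks` contains no public network.
  bastion_host = "165.225.0.10"

  # Placeholder -- must match the CNS domain Triton generates
  # for the Consul instances.
  domain_name = "foo.svc.<triton_account_uuid>.us-east-1.cns.joyent.com"

  private_key = "~/.ssh/id_rsa"
}
```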
- Run `terraform get && terraform init && terraform plan` to see the execution plan.
- Run `terraform apply` to deploy the Consul cluster.
- Run `terraform destroy` if you want to delete the cluster.
On your Consul instances, these files are available for inspection:

- `/usr/local/bin/consul` -- the Consul binary
- `/var/local/consul.conf.json` -- the Consul configuration file
- `/var/run/consul.pid` -- the pidfile of Consul
- `/var/log/consul.log` -- the log file of Consul
This module runs two helper scripts to launch the Consul servers: (1) `consul-launcher.sh` and (2) `consul-reconfig.sh`.

`consul-launcher.sh` starts the Consul server and restarts it if it fails. In the initial phase, it waits until enough Triton instances are ready -- by comparing the number of CNS A records for the domain you specified in `domain_name` against the `expect` configuration value -- and then launches the server. Here are the files related to this script:

- `/var/local/consul-launcher.sh` -- the script itself
- `/var/run/consul-launcher.pid` -- the pidfile of this process
- `/var/log/consul.launcher.log` -- the log file of this process
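The readiness check that `consul-launcher.sh` performs can be sketched roughly as follows. This is a minimal illustration, not the module's actual code: the function names are hypothetical, and the real script would obtain the record list from DNS (e.g. via something like `dig +short A "$DOMAIN_NAME"`) rather than from a literal string:

```shell
#!/bin/sh
# Hedged sketch of the launcher's readiness test: count the CNS A
# records for the cluster domain and compare against `expect`.

# Count non-empty lines (one A record per line) on stdin.
count_records() {
  grep -c .
}

# Succeed when the record list in $1 has at least $2 entries.
enough_instances() {
  records="$1"
  expect="$2"
  [ "$(printf '%s\n' "$records" | count_records)" -ge "$expect" ]
}

# Hypothetical record list; the real script would query DNS here.
if enough_instances "10.0.0.1
10.0.0.2
10.0.0.3" 3; then
  echo "enough instances; launching consul"
fi
```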
`consul-reconfig.sh` periodically checks the CNS A records of the Consul cluster and generates a new Consul configuration. If the new configuration differs from the existing one, it replaces the configuration and notifies the Consul server by sending a SIGHUP signal. However, if the number of A records is less than the `expect` configuration value, it does nothing.

- `/var/local/consul-reconfig.sh` -- the script itself
- `/var/run/consul-reconfig.pid` -- the pidfile of this process
- `/var/log/consul.reconfig.log` -- the log file of this process
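The compare-and-reload step of `consul-reconfig.sh` can be sketched like this. It is an illustration only: the temp-file paths and JSON contents are stand-ins, and the actual SIGHUP delivery (commented out below) would target the pid in `/var/run/consul.pid`:

```shell
#!/bin/sh
# Hedged sketch of the reconfig step: swap in a newly generated config
# only when it differs from the current one, then signal Consul.
CONF=$(mktemp)   # stand-in for /var/local/consul.conf.json
NEW=$(mktemp)    # freshly generated candidate configuration

printf '{"retry_join":["10.0.0.1"]}\n' > "$CONF"
printf '{"retry_join":["10.0.0.1","10.0.0.2"]}\n' > "$NEW"

if ! cmp -s "$NEW" "$CONF"; then
  # Configurations differ: replace and tell Consul to reload.
  mv "$NEW" "$CONF"
  # The real script would now send SIGHUP, e.g.:
  #   kill -HUP "$(cat /var/run/consul.pid)"
  echo "configuration replaced; consul reloaded"
else
  # No change: discard the candidate and do nothing.
  rm -f "$NEW"
fi
```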