- Consul v1.10+
- Terraform v1.0+
- Terraform Cloud
- Consul Terraform Sync
Check out the AWS ALB Listener Rule Terraform module, which is used by the Consul Terraform Sync configuration.
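For orientation, a Consul Terraform Sync task that points at an ALB listener rule module might be declared roughly as follows. This is only a sketch: the task name, module source, and service name are placeholders, and depending on your CTS version the module is referenced with `source` (older releases) or `module` (newer releases).

```hcl
# Hypothetical CTS task definition; names and module source are placeholders,
# not the exact configuration shipped in this repository.
task {
  name        = "alb-listener-rule"
  description = "Update ALB listener rule weights from Consul service metadata"
  source      = "./modules/aws-alb-listener-rule"
  providers   = ["aws"]
  services    = ["my-application"]
}
```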
This repository uses Terraform Cloud to store infrastructure state.
To set it up:
- Fork this repository.
- Create four workspaces:
  - `datacenter`
  - `k8s-cloud`
  - `consul`
  - `application`
- Connect each workspace to your fork using the VCS workflow and set its working directory:
  - `datacenter`: working directory is `datacenter`
  - `k8s-cloud`: working directory is `cloud`
  - `consul`: working directory is `consul`
  - `application`: working directory is `application`
- Add AWS credentials as sensitive environment variables to each workspace.
- Define two variables in the `datacenter` workspace (see the sketch after this list):
  - `client_ip_address`: `<insert your public ip>/32`
  - `enable_peering`: `false`
- In each directory, you'll find a `terraform.auto.tfvars`.
- By default, we set the following regions. You can change these, but you must change them across all files.
  - `datacenter` (VM): `us-east-1`
  - `cloud` (Kubernetes): `us-west-2`
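For orientation, the variables above could look like the following when written as Terraform variable values. The two `datacenter` variables match the names in this README; the region variable names are placeholders, and the `terraform.auto.tfvars` files in each directory remain the source of truth.

```hcl
# Illustrative values only; the real files are the terraform.auto.tfvars in each directory.
client_ip_address = "203.0.113.10/32" # replace with <your public ip>/32
enable_peering    = false             # flipped to true later to enable VPC peering

# Default regions used by this demo (variable names here are placeholders).
datacenter_region = "us-east-1" # datacenter (VM)
cloud_region      = "us-west-2" # cloud (Kubernetes)
```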
- Set up Consul security configurations, including gossip encryption, certificates, and ACLs.
  - Run `make consul_certs` to create certificates for the `datacenter` Consul server.
  - Run `make consul_secrets` to generate the Terraform variables for gossip encryption and certificates.
  - Copy the variables from `datacenter/secrets.tfvars` into the `datacenter` workspace, marking them as sensitive.
- Start a new run and apply changes to the `datacenter` workspace.
- Bootstrap Consul ACLs by running `make consul_acl_bootstrap`. This will save the root management token in `consul_acl_bootstrap.json`.
- Start a new run and apply changes to the `cloud` workspace.
- Go into the `datacenter` workspace.
  - Update the variable to set `enable_peering = true`. This sets up VPC peering between the `cloud` and `datacenter` environments (see the sketch after this list).
  - Start a new run and apply changes to the `datacenter` workspace.
- Start a new run for the `consul` workspace.
- Start a new run for the `application` workspace.
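For context on what the `enable_peering` flag drives, cross-region VPC peering in Terraform generally follows the pattern sketched below. This is not the repository's actual configuration; the variable names, provider aliases, and resource names are illustrative.

```hcl
# Illustrative cross-region VPC peering gated by enable_peering.
variable "enable_peering" {
  type    = bool
  default = false
}

variable "datacenter_vpc_id" {
  type = string
}

variable "cloud_vpc_id" {
  type = string
}

provider "aws" {
  alias  = "datacenter"
  region = "us-east-1"
}

provider "aws" {
  alias  = "cloud"
  region = "us-west-2"
}

# Requester side, created from the datacenter (us-east-1) VPC.
resource "aws_vpc_peering_connection" "datacenter_to_cloud" {
  count       = var.enable_peering ? 1 : 0
  provider    = aws.datacenter
  vpc_id      = var.datacenter_vpc_id
  peer_vpc_id = var.cloud_vpc_id
  peer_region = "us-west-2"
}

# Accepter side, created in the cloud (us-west-2) VPC.
resource "aws_vpc_peering_connection_accepter" "cloud" {
  count                     = var.enable_peering ? 1 : 0
  provider                  = aws.cloud
  vpc_peering_connection_id = aws_vpc_peering_connection.datacenter_to_cloud[0].id
  auto_accept               = true
}
```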
- Generate variables for Consul Terraform Sync to use in its module and save them in `consul_terraform_sync/datacenter.modules.tfvars`. This includes an ALB, listener rule, and target group created by the `datacenter` Terraform configuration. It also updates `config.local.hcl` with the Consul UI load balancer endpoint.

  ```shell
  make cts_variables
  ```

- Run Consul Terraform Sync.

  ```shell
  make cts
  ```

- To verify everything is working, get the load balancer's DNS and issue an HTTP GET request with the `Host` header set to `my-application.my-company.net`. The request should go to `datacenter`.

  ```shell
  $ make test
  {
    "name": "my-application (datacenter)",
    "uri": "/",
    "type": "HTTP",
    "ip_addresses": [
      "172.25.16.8"
    ],
    "start_time": "2021-08-26T16:43:12.552603",
    "end_time": "2021-08-26T16:43:12.552681",
    "duration": "77.835µs",
    "body": "my-application (datacenter)",
    "code": 200
  }
  ```

- You can update the deployment to send a percentage of traffic to the `cloud` instances of `my-application`.

  ```shell
  $ kubectl edit deployment my-application

  # update annotation
  - consul.hashicorp.com/service-meta-weight: "50"
  ```
- CTS will pick up the change from the service metadata and update the ALB listener to send 50% of traffic to `cloud` (see the sketch after this list).
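To give a rough idea of how a CTS-driven module can turn that metadata into listener weights, here is a sketch. It is not the module shipped with this demo: the `services` variable shape follows the common CTS module convention, and the weight lookup, variable names, and listener rule wiring are illustrative.

```hcl
# Illustrative fragment of a CTS-driven module: derive ALB listener rule weights
# from Consul service metadata. Names, defaults, and wiring are placeholders.
variable "services" {
  description = "Consul services monitored by Consul Terraform Sync"
  type = map(object({
    name    = string
    address = string
    port    = number
    meta    = map(string)
  }))
}

variable "listener_arn" {
  type = string
}

variable "datacenter_target_group_arn" {
  type = string
}

variable "cloud_target_group_arn" {
  type = string
}

locals {
  # Read the weight from service metadata; default to 0 so all traffic stays in the datacenter.
  cloud_weight      = try(max([for s in values(var.services) : tonumber(lookup(s.meta, "weight", "0"))]...), 0)
  datacenter_weight = 100 - local.cloud_weight
}

resource "aws_lb_listener_rule" "my_application" {
  listener_arn = var.listener_arn

  action {
    type = "forward"

    forward {
      target_group {
        arn    = var.datacenter_target_group_arn
        weight = local.datacenter_weight
      }

      target_group {
        arn    = var.cloud_target_group_arn
        weight = local.cloud_weight
      }
    }
  }

  condition {
    host_header {
      values = ["my-application.my-company.net"]
    }
  }
}
```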
- Clean up CTS and the Consul deployment to Kubernetes.

  ```shell
  make clean
  ```

- Go into Terraform Cloud and queue a destroy in the following order:
  - `application`
  - `consul`
  - `k8s-cloud`
  - `datacenter`
- In this demo, the "cloud" application is hosted on Kubernetes (for ease of deployment).
- The ALB mimics a datacenter load balancer.
- The configuration peers two VPCs in two different regions.
- Ideally, you would configure your Kubernetes pod with an AWS IAM role for configuring a load balancer. To abstract away as many AWS constructs as possible, this demo passes the credentials to CTS directly to mimic the passing of any provider credentials (see the sketch after this list).
- Consul Terraform Sync is deployed to Kubernetes so that the daemon continuously runs. It uses a Docker image built by `canary/Dockerfile`.
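As a rough picture of that credential passing, a CTS configuration can hand AWS credentials straight to the provider it drives. The block below is illustrative only and is not this repository's `config.local.hcl`; the values are placeholders, and the standard AWS environment variables on the CTS pod work as well.

```hcl
# Illustrative CTS provider block: credentials handed to CTS directly instead of
# relying on an IAM role. Values are placeholders; AWS_ACCESS_KEY_ID and
# AWS_SECRET_ACCESS_KEY in the pod's environment are an alternative.
terraform_provider "aws" {
  region     = "us-east-1"
  access_key = "REPLACE_WITH_AWS_ACCESS_KEY_ID"
  secret_key = "REPLACE_WITH_AWS_SECRET_ACCESS_KEY"
}
```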