This is a demo of using Boundary, Consul, and Vault to secure an application on Kubernetes.
Boundary controls user access to databases and test endpoints. Consul secures service-to-service communication. Vault secures the Consul cluster and issues temporary credentials for an application to access a database.
Each folder contains a few different configurations.

Terraform Configurations

- `infrastructure/`: All the infrastructure to run the system.
  - VPC (3 private subnets, 3 public subnets)
  - Boundary cluster (controllers, workers, and AWS RDS PostgreSQL database)
  - AWS Elastic Kubernetes Service cluster
  - AWS RDS (PostgreSQL) database for demo application
  - HashiCorp Virtual Network (peered to VPC)
  - HCP Consul
  - HCP Vault
- `boundary`: Configures Boundary with two projects, one for operations and the other for development teams.
- `datadog/setup/`: Deploys Datadog agents to the Kubernetes cluster.
- `vault/setup/`: Deploys a Vault cluster via Helm chart and sets up the Kubernetes auth method.
- `certs/`: Sets up an offline root CA and signs the intermediate CA in Vault for Consul-related certificates. Only applies if you set `use_hcp_consul = false` to deploy Consul on Kubernetes.
- `vault/consul/`: Sets up Consul-related secrets engines.
- `consul/setup/`: Deploys a Consul cluster via Helm chart. For demonstration of Vault as a secrets backend, deploys Consul servers + clients.
- `consul/config/`: Sets up the external service to the database.
- `vault/app/`: Sets up secrets engines for applications. Archived in favor of `consul/cts/`.

Other

- `consul/cts/`: Deploys CTS to Kubernetes for setting up Vault database secrets based on the database service's address.
- `application/hashicups`: Deploys the HashiCorp Demo Application (AKA HashiCups) to Kubernetes.
- `application/expense-report`: Deploys a fake service with Datadog tracing and metrics.
- `database/`: Configures the HashiCorp Demo Application database.
To run this demo, you need:

- Terraform 1.3
- Consul 1.12 (on Kubernetes)
- HashiCorp Cloud Platform (HCP) Vault 1.11
- HashiCorp Cloud Platform (HCP) Consul 1.13
- Boundary 0.11
- Terraform Cloud
- AWS Account
  - Create an AWS EC2 keypair.
- HashiCorp Cloud Platform account
  - You need access to HCP Consul and Vault.
  - Create a service principal for the HCP Terraform provider.
- `jq` installed
- Fork this repository.
Note: When you run this, you might get the error `Provider produced inconsistent final plan`. This is because we're using `default_tags`. Re-run the plan and apply to resolve the error.
First, set up the Terraform workspace.
- Create a new Terraform workspace.
- Choose "Version control workflow".
- Connect to GitHub.
- Choose your fork of this repository.
- Name the workspace `infrastructure`.
- Select the "Advanced Options" dropdown.
- Use the working directory `infrastructure`.
- Select "Create workspace".
Next, configure the workspace's variables.
- Variables should include:
  - `client_cidr_block` (sensitive): list including the public IP address of your machine, in `[00.00.00.00/32]` form. You get it by running `curl ifconfig.me` in your terminal (see the one-liner after this list).
  - `datadog_api_key` (sensitive): API key to send Boundary and HCP Vault logs and metrics to Datadog
- Environment Variables should include:
  - `HCP_CLIENT_ID`: HCP service principal ID
  - `HCP_CLIENT_SECRET` (sensitive): HCP service principal secret
  - `AWS_ACCESS_KEY_ID`: AWS access key ID
  - `AWS_SECRET_ACCESS_KEY` (sensitive): AWS secret access key
  - `AWS_SESSION_TOKEN` (sensitive): if applicable, the token for the session
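If you need the `client_cidr_block` value, the one-liner below prints your public IP in `/32` form (assuming `curl` is available):

```shell
# Print your machine's public IP in /32 CIDR form for client_cidr_block
echo "$(curl -s ifconfig.me)/32"
```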
If you have additional variables you want to customize, including region, make sure to update them in the `infrastructure/terraform.auto.tfvars` file.
Finally, start a new plan and apply it. It can take more than 15 minutes to provision!
First, set up the Terraform workspace.
- Create a new Terraform workspace.
- Choose "Version control workflow".
- Connect to GitHub.
- Choose your fork of this repository.
- Name the workspace `boundary`.
- Select the "Advanced Options" dropdown.
- Use the working directory `boundary`.
- Select "Create workspace".
Next, configure the workspace's variables. This Terraform configuration
retrieves a set of variables using the `terraform_remote_state` data source.
- Variables should include:
  - `tfc_organization`: your Terraform Cloud organization name
- Environment Variables should include:
  - `AWS_ACCESS_KEY_ID`: AWS access key ID
  - `AWS_SECRET_ACCESS_KEY` (sensitive): AWS secret access key
  - `AWS_SESSION_TOKEN` (sensitive): if applicable, the token for the session
Queue to plan and apply. This creates an organization with two scopes:

- `core_infra`, which allows you to SSH into EKS nodes
- `product_infra`, which allows you to access the PostgreSQL database

Only `product` users will be able to access `product_infra`. `operations` users will be able to access both `core_infra` and `product_infra`.
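Once applied, you can sanity-check the scopes from your terminal. This is an illustrative sketch; the auth method ID and login name are placeholders for the values your Terraform run created.

```shell
# Authenticate to Boundary, then list the targets visible across scopes
# (auth method ID and login name are placeholders)
boundary authenticate password \
  -auth-method-id ampw_1234567890 \
  -login-name operations-user
boundary targets list -scope-id global -recursive
```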
For logging and other metrics, deploy a Datadog agent to the Kubernetes cluster.
Set up the Terraform workspace.
- Create a new Terraform workspace.
- Choose "Version control workflow".
- Connect to GitHub.
- Choose your fork of this repository.
- Name the workspace `datadog-setup`.
- Select the "Advanced Options" dropdown.
- Use the working directory `datadog/setup`.
- Select "Create workspace".
Next, configure the workspace's variables. This Terraform configuration
retrieves a set of variables using the `terraform_remote_state` data source.
- Variables should include:
  - `tfc_organization`: your Terraform Cloud organization name
  - `datadog_api_key` (sensitive): API key to send Boundary and HCP Vault logs and metrics to Datadog
- Environment Variables should include:
  - `AWS_ACCESS_KEY_ID`: AWS access key ID
  - `AWS_SECRET_ACCESS_KEY` (sensitive): AWS secret access key
  - `AWS_SESSION_TOKEN` (sensitive): if applicable, the token for the session
First, set up the Terraform workspace.
- Create a new Terraform workspace.
- Choose "Version control workflow".
- Connect to GitHub.
- Choose your fork of this repository.
- Name the workspace `vault-setup`.
- Select the "Advanced Options" dropdown.
- Use the working directory `vault/setup`.
- Select "Create workspace".
Next, configure the workspace's variables. This Terraform configuration
retrieves a set of variables using the `terraform_remote_state` data source.
- Variables should include:
  - `tfc_organization`: your Terraform Cloud organization name
- Environment Variables should include:
  - `HCP_CLIENT_ID`: HCP service principal ID
  - `HCP_CLIENT_SECRET` (sensitive): HCP service principal secret
  - `AWS_ACCESS_KEY_ID`: AWS access key ID
  - `AWS_SECRET_ACCESS_KEY` (sensitive): AWS secret access key
  - `AWS_SESSION_TOKEN` (sensitive): if applicable, the token for the session
Terraform will set up the Kubernetes authentication method and deploy the Vault Helm chart to the cluster.
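A quick way to confirm the deployment (assuming your kubeconfig points at the EKS cluster and `VAULT_ADDR`/`VAULT_TOKEN` are set for the new Vault):

```shell
# Vault server pods deployed by the Helm chart
kubectl get pods -l app.kubernetes.io/name=vault

# The kubernetes/ auth method should appear in this list
vault auth list
```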
As a best practice, store root CAs away from Vault. To demonstrate this, we generate a root CA offline. We use three separate root CAs:
- Cluster Root CA
  - Level 1 Intermediate CA (server root)
  - Level 2 Intermediate CA (server intermediate)
- Service Mesh Root CA for mTLS: This requires three levels because we will need to reconfigure the CA for the correct SPIFFE URI.
  - Level 1 Intermediate CA
  - Level 2 Intermediate CA (service mesh root)
  - Level 3 Intermediate CA (service mesh intermediate)
- API Gateway Root CA
  - Level 1 Intermediate CA (gateway root)
  - Level 2 Intermediate CA (gateway intermediate)
NOTE: This step runs Terraform locally in order to keep the offline root CA secure.
Run the command to create a root CA as well as the intermediate CAs, and store the intermediate CAs in Vault.
```shell
make configure-certs
```
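For reference, the intermediate CA half of that flow looks roughly like the following Vault CLI calls. The mount path and common name are illustrative; the actual steps live in the `certs/` Terraform configuration.

```shell
# Illustrative sketch: host an intermediate CA in Vault while the root CA stays offline
vault secrets enable -path=consul-server-int pki

# Generate a CSR for the intermediate; the private key never leaves Vault
vault write -field=csr consul-server-int/intermediate/generate/internal \
  common_name="Consul Server Intermediate CA" > intermediate.csr

# Sign intermediate.csr with the offline root CA outside of Vault, then import the result
vault write consul-server-int/intermediate/set-signed certificate=@intermediate.crt
```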
First, set up the Terraform workspace.
- Create a new Terraform workspace.
- Choose "Version control workflow".
- Connect to GitHub.
- Choose your fork of this repository.
- Name the workspace `vault-consul`.
- Select the "Advanced Options" dropdown.
- Use the working directory `vault/consul`.
- Select "Create workspace".
Next, configure the workspace's variables. This Terraform configuration
retrieves a set of variables using the `terraform_remote_state` data source.
- Variables should include:
  - `tfc_organization`: your Terraform Cloud organization name
- Environment Variables should include:
  - `HCP_CLIENT_ID`: HCP service principal ID
  - `HCP_CLIENT_SECRET` (sensitive): HCP service principal secret
Terraform will set up the PKI secrets engine for TLS in the Consul cluster (not the service mesh).
Reconfigure HCP Consul's root service mesh CA to use HCP Vault.
```shell
make configure-hcp-certs
```
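To confirm the service mesh CA provider switched over (assuming your terminal has Consul CLI access to the HCP Consul cluster), read the CA configuration back; the provider should report `vault`.

```shell
# Read back the service mesh CA configuration after the reconfiguration
consul connect ca get-config
```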
Using kustomize, deploy the Gateway CRDs.
```shell
make configure-kubernetes
```
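A quick way to confirm the Gateway API CRDs landed on the cluster:

```shell
# List the Gateway API CRDs installed by the kustomize manifests
kubectl get crd | grep -i gateway
```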
Then, set up the Terraform workspace.
- Create a new Terraform workspace.
- Choose "Version control workflow".
- Connect to GitHub.
- Choose your fork of this repository.
- Name the workspace `consul-setup`.
- Select the "Advanced Options" dropdown.
- Use the working directory `consul/setup`.
- Select "Create workspace".
Next, configure the workspace's variables. This Terraform configuration
retrieves a set of variables using the `terraform_remote_state` data source.
- Variables should include:
  - `tfc_organization`: your Terraform Cloud organization name
- Environment Variables should include:
  - `HCP_CLIENT_ID`: HCP service principal ID
  - `HCP_CLIENT_SECRET` (sensitive): HCP service principal secret
  - `AWS_ACCESS_KEY_ID`: AWS access key ID
  - `AWS_SECRET_ACCESS_KEY` (sensitive): AWS secret access key
  - `AWS_SESSION_TOKEN` (sensitive): if applicable, the token for the session
- Queue to plan and apply. This deploys Consul clients and a terminating gateway via the Consul Helm chart to the EKS cluster to join the HCP Consul servers.
Update the terminating gateway with a write policy to the database.
```shell
make configure-terminating-gateway
```
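Under the hood, this amounts to attaching an ACL policy that grants the terminating gateway write access to the database service. A rough sketch of an equivalent manual step (the policy and service names are illustrative) is:

```shell
# Illustrative only: create a policy granting write access to the database service;
# the Make target creates something like this and attaches it to the gateway's token
consul acl policy create -name database-write \
  -rules 'service "database" { policy = "write" }'
```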
Then, set up the Terraform workspace.
- Create a new Terraform workspace.
- Choose "Version control workflow".
- Connect to GitHub.
- Choose your fork of this repository.
- Name the workspace `consul-config`.
- Select the "Advanced Options" dropdown.
- Use the working directory `consul/config`.
- Select "Create workspace".
Next, configure the workspace's variables. This Terraform configuration
retrieves a set of variables using the `terraform_remote_state` data source.
- Variables should include:
  - `tfc_organization`: your Terraform Cloud organization name
- Environment Variables should include:
  - `AWS_ACCESS_KEY_ID`: AWS access key ID
  - `AWS_SECRET_ACCESS_KEY` (sensitive): AWS secret access key
  - `AWS_SESSION_TOKEN` (sensitive): if applicable, the token for the session
- Queue to plan and apply. This does a few things, including:
  - registers the database as an external service to Consul
  - deploys the Consul API Gateway
  - sets up the application intentions
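Once applied, you can spot-check the results from a terminal that has Consul CLI access to the cluster:

```shell
# The database should now appear in the catalog as an external service
consul catalog services

# List the service-intentions config entries created for the applications
consul config list -kind service-intentions
```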
To add data, you need to log into the PostgreSQL database. However, it's on a private network. You need to use Boundary to proxy to the database.
- Set up all the variables you need in your environment variables.

  ```shell
  source set_terminal.sh
  ```

- Run the following commands to log in and load data into the `products` database.

  ```shell
  make configure-db
  ```
If you try to log in as a user of the `products` team, you can print out the tables.

```shell
make postgres-products
```
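For reference, these Make targets wrap a Boundary connect helper roughly along these lines; the target ID and username below are placeholders for the values your Boundary workspace created.

```shell
# Proxy to the private PostgreSQL database through a Boundary session and open psql
# (target ID and username are placeholders)
boundary connect postgres \
  -target-id ttcp_1234567890 \
  -username postgres \
  -dbname products
```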
You can use Consul-Terraform-Sync to read the database address from Consul and automatically configure a database secrets engine in Vault using a Terraform module.
To do this, deploy CTS to Kubernetes.
Set up the Terraform workspace.
- Create a new Terraform workspace.
- Choose "Version control workflow".
- Connect to GitHub.
- Choose your fork of this repository.
- Name the workspace `consul-cts`.
- Select the "Advanced Options" dropdown.
- Use the working directory `consul/cts`.
- Select "Create workspace".
Next, configure the workspace's variables. This Terraform configuration
retrieves a set of variables using the `terraform_remote_state` data source.
- Variables should include:
  - `tfc_organization`: your Terraform Cloud organization name
- Environment Variables should include:
  - `AWS_ACCESS_KEY_ID`: AWS access key ID
  - `AWS_SECRET_ACCESS_KEY` (sensitive): AWS secret access key
  - `AWS_SESSION_TOKEN` (sensitive): if applicable, the token for the session
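After the workspace applies and CTS is running, you can inspect its task status through the CTS API. The deployment name below is an assumption about what `consul/cts/` creates, so adjust it to the actual name; this also assumes `jq` and cluster access from your terminal.

```shell
# Port-forward to the CTS API (deployment name is an assumption) and check task status
kubectl port-forward deploy/consul-terraform-sync 8558:8558 &
curl -s localhost:8558/v1/status/tasks | jq .
```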
To deploy the example application, run `make hashicups`.

You can check that everything is running by looking at the pods in Kubernetes.
```shell
$ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
## omitted for clarity
frontend-5d7f97456b-2fznv   2/2     Running   0          15m
nginx-59c9dbb9ff-j9xhc      2/2     Running   0          15m
payments-67c89b9bc9-kbb9r   2/2     Running   0          16m
product-55989bf685-ll5t7    3/3     Running   0          5m5s
public-64ccfc4fc7-jd7v7     2/2     Running   0          8m17s
```
Check the Consul API Gateway for the address of the load balancer to connect to HashiCups.

```shell
kubectl get gateway
```

Deploy an example application with Datadog tracing enabled.

```shell
make expense-report
```

Check the Consul API Gateway for the address of the load balancer to connect to the expense reporting example application.

```shell
kubectl get gateway
```

Send some traffic.

```shell
curl -k https://<gateway load balancer dns>/report
```
To use Boundary, use your terminal in the top level of this repository.
- Set the `BOUNDARY_ADDR` environment variable to the Boundary endpoint.

  ```shell
  source set_terminal.sh
  ```

- Use the example command in the top-level `Makefile` to SSH to the EKS nodes as the operations team (a rough sketch of the underlying command follows this list).

  ```shell
  make ssh-operations
  ```

- Go to the Boundary UI and examine the "Sessions" page. You should see an active session in the Boundary list because you accessed the EKS node over SSH.
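For reference, `make ssh-operations` wraps a Boundary SSH helper roughly like the following; the target ID and SSH user are placeholders for the values used by the `Makefile`.

```shell
# SSH to an EKS node through a Boundary session
# (target ID and SSH username are placeholders)
boundary connect ssh -target-id ttcp_0987654321 -- -l ec2-user
```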
Delete applications.

```shell
make clean-application
```

Revoke Vault credentials for applications.

```shell
make clean-vault
```

Disable CTS task, remove resources, and delete CTS.

```shell
make clean-cts
```
Go into Terraform Cloud and destroy resources for the `consul-cts` workspace.

Go into Terraform Cloud and destroy resources for the `consul-config` workspace.

Go into Terraform Cloud and destroy resources for the `consul-setup` workspace.

Remove additional Consul resources.

```shell
make clean-consul
```

Remove API Gateway manifests.

```shell
make clean-kubernetes
```

Go into Terraform Cloud and destroy resources for the `vault-consul` workspace.

Remove certificates for Consul from Vault.

```shell
make clean-certs
```

Go into Terraform Cloud and destroy resources for the `vault-setup` workspace.

Go into Terraform Cloud and destroy resources for the `datadog-setup` workspace.

Go into Terraform Cloud and destroy resources for the `boundary` workspace.

Go into Terraform Cloud and destroy resources for the `infrastructure` workspace.
- The demo application comes from the HashiCorp Demo Application.
- portal.cloud.hashicorp.com/sign-up
- consul.io/docs/k8s/installation/vault
- vaultproject.io/docs/secrets/pki
- consul.io/docs/nia
- vaultproject.io/docs/auth/kubernetes
- consul.io/docs/security/acl/auth-methods/kubernetes
- hashi.co/k8s-vault-consul
- hashi.co/k8s-consul-api-gateway