terraform-aws-elasticsearch

The module deploys a multi-node Elasticsearch cluster.

Usage

Dependencies

The module requires several additional components to provision the Elasticsearch cluster:

  • At least two subnets to place a load balancer and autoscaling group.
  • Route53 zone - the module creates an HTTPS endpoint for the cluster in this zone.

Service network

The easiest way to create subnets in AWS is to use the Service Network Terraform module.

A typical configuration includes at least two public and two private subnets.

module "service-network" {
  source                = "infrahouse/service-network/aws"
  version               = "~> 2.0"
  service_name          = "elastic"
  vpc_cidr_block        = "10.1.0.0/16"
  management_cidr_block = "10.1.0.0/16"
  subnets = [
    {
      cidr                    = "10.1.0.0/24"
      availability-zone       = data.aws_availability_zones.available.names[0]
      map_public_ip_on_launch = true
      create_nat              = true
      forward_to              = null
    },
    {
      cidr                    = "10.1.1.0/24"
      availability-zone       = data.aws_availability_zones.available.names[1]
      map_public_ip_on_launch = true
      create_nat              = true
      forward_to              = null
    },
    {
      cidr                    = "10.1.2.0/24"
      availability-zone       = data.aws_availability_zones.available.names[0]
      map_public_ip_on_launch = false
      create_nat              = false
      forward_to              = "10.1.0.0/24"
    },
    {
      cidr                    = "10.1.3.0/24"
      availability-zone       = data.aws_availability_zones.available.names[1]
      map_public_ip_on_launch = false
      create_nat              = false
      forward_to              = "10.1.1.0/24"
    }
  ]
}
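
The example above references the data.aws_availability_zones.available data source. A minimal declaration, assuming the default AWS provider is already configured:

data "aws_availability_zones" "available" {
  state = "available"
}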

Route53 Zone

The module will create an A record for the cluster in a specified zone. If the cluster name (passed as var.cluster_name) is 'elastic', the client URL will be https://elastic.ci-cd.infrahouse.com. The zone can be created in the same Terraform module or accessed as a data source.

data "aws_route53_zone" "cicd" {
  name = "ci-cd.infrahouse.com"
}
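
If the zone is managed by the same Terraform code instead, it can be created with the standard aws_route53_zone resource (a minimal sketch using the zone name from the example above):

resource "aws_route53_zone" "cicd" {
  name = "ci-cd.infrahouse.com"
}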

Bootstrapping cluster

Any new cluster needs to be bootstrapped first. Let's say we want to create a three-node cluster. Declare the cluster and add bootstrap_mode = true to the module inputs. In bootstrap mode the autoscaling group starts with one node rather than three.

module "test" {
  module "test" {
    source    = "registry.infrahouse.com/infrahouse/elasticsearch/aws"
    version   = "~> 1.3"
    providers = {
      aws     = aws
      aws.dns = aws
    }
    internet_gateway_id = module.service-network.internet_gateway_id
    key_pair_name       = aws_key_pair.test.key_name
    subnet_ids          = module.service-network.subnet_public_ids
    zone_id             = data.aws_route53_zone.cicd.zone_id
    bootstrap_mode      = true
  }
}
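
The example assumes an aws_key_pair.test resource and a default AWS provider; both provider slots (aws and aws.dns) may point to the same provider configuration, as shown. A minimal sketch of these prerequisites - the region, key name, and public key path are placeholders:

provider "aws" {
  region = "us-west-1"
}

resource "aws_key_pair" "test" {
  key_name   = "elastic-test"
  public_key = file("${path.module}/id_rsa.pub")
}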

Provisioning remaining nodes

After the cluster is bootstrapped, disable the bootstrap mode.

diff --git a/test_data/test_module/main.tf b/test_data/test_module/main.tf
index c13df0d..33cf0d3 100644
--- a/test_data/test_module/main.tf
+++ b/test_data/test_module/main.tf
@@ -12,5 +12,5 @@ module "test" {
   subnet_ids                    = module.service-network.subnet_private_ids
   zone_id                       = data.aws_route53_zone.cicd.zone_id
-  bootstrap_mode                = true
+  bootstrap_mode                = false
 }
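
For reference, the non-bootstrap configuration ends up looking roughly like this (a sketch combining the examples above; cluster_master_count and cluster_data_count both default to 3 per the Inputs table below and are shown only for illustration):

module "test" {
  source    = "registry.infrahouse.com/infrahouse/elasticsearch/aws"
  version   = "~> 1.3"
  providers = {
    aws     = aws
    aws.dns = aws
  }
  internet_gateway_id  = module.service-network.internet_gateway_id
  key_pair_name        = aws_key_pair.test.key_name
  subnet_ids           = module.service-network.subnet_private_ids
  zone_id              = data.aws_route53_zone.cicd.zone_id
  bootstrap_mode       = false
  cluster_master_count = 3
  cluster_data_count   = 3
}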

Accessing the cluster

The module creates three endpoints to access the cluster. All three of them are output variables of the module.

  • Master nodes: https://${var.cluster_name}-master.${data.aws_route53_zone.cluster.name} or https://${var.cluster_name}.${data.aws_route53_zone.cluster.name}
  • Data nodes: https://${var.cluster_name}-data.${data.aws_route53_zone.cluster.name}
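
The endpoint values are exposed as the cluster_url, cluster_master_url, and cluster_data_url module outputs (see Outputs below), so they can be re-exported or passed to other resources, for example:

output "elasticsearch_url" {
  description = "HTTPS endpoint of the Elasticsearch cluster"
  value       = module.test.cluster_url
}

output "elasticsearch_data_url" {
  description = "HTTPS endpoint of the cluster data nodes"
  value       = module.test.cluster_data_url
}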

Requirements

Name Version
terraform ~> 1.5
aws ~> 5.11
cloudinit ~> 2.3
random ~> 3.6

Providers

Name Version
aws ~> 5.11
aws.dns ~> 5.11
random ~> 3.6

Modules

Name Source Version
elastic_cluster registry.infrahouse.com/infrahouse/website-pod/aws 3.2.1
elastic_cluster_data registry.infrahouse.com/infrahouse/website-pod/aws 3.2.1
elastic_data_userdata infrahouse/cloud-init/aws 1.11.1
elastic_master_userdata infrahouse/cloud-init/aws 1.11.1
update-dns registry.infrahouse.com/infrahouse/update-dns/aws 0.6.0
update-dns-data registry.infrahouse.com/infrahouse/update-dns/aws 0.6.0

Resources

Name Type
aws_s3_bucket.snapshots-bucket resource
aws_s3_bucket_public_access_block.public_access resource
aws_secretsmanager_secret.elastic resource
aws_secretsmanager_secret.kibana_system resource
aws_secretsmanager_secret_version.elastic resource
aws_secretsmanager_secret_version.kibana_system resource
aws_security_group.backend_extra resource
aws_vpc_security_group_ingress_rule.backend_extra_reserved resource
random_password.elastic resource
random_password.kibana_system resource
random_string.bucket_prefix resource
random_string.profile-suffix resource
aws_ami.ubuntu data source
aws_availability_zones.available data source
aws_caller_identity.current data source
aws_iam_policy_document.elastic_permissions data source
aws_iam_policy_document.secrets-permission-policy data source
aws_iam_role.caller_role data source
aws_region.current data source
aws_route53_zone.cluster data source
aws_subnet.selected data source

Inputs

Name Description Type Default Required
asg_ami Image for EC2 instances string null no
asg_health_check_grace_period ASG will wait up to this number of seconds for instance to become healthy number 900 no
bootstrap_mode Set this to true if the cluster is to be bootstrapped bool true no
cluster_data_count Number of data nodes in the cluster number 3 no
cluster_master_count Number of master nodes in the cluster number 3 no
cluster_name How to name the cluster string "elastic" no
environment Name of environment. string "development" no
extra_files Additional files to create on an instance. list(object({ content = string, path = string, permissions = string })) [] no
extra_repos Additional APT repositories to configure on an instance. map(object({ source = string, key = string })) {} no
instance_type Instance type to run the elasticsearch node string "t3.medium" no
internet_gateway_id Not used directly, but an AWS Internet Gateway must be present; pass its ID to enforce the dependency. string n/a yes
key_pair_name SSH keypair name to be deployed in EC2 instances string n/a yes
max_instance_lifetime_days The maximum amount of time, in days, that an instance can be in service; must be either 0 or between 7 and 365. number 0 no
packages List of packages to install when the instance bootstraps. list(string) [] no
puppet_debug_logging Enable debug logging if true. bool false no
puppet_hiera_config_path Path to hiera configuration file. string "{root_directory}/environments/{environment}/hiera.yaml" no
puppet_module_path Path to common puppet modules. string "{root_directory}/modules" no
puppet_root_directory Path where the puppet code is hosted. string "/opt/puppet-code" no
root_volume_size Root volume size in EC2 instance in Gigabytes number 30 no
secret_elastic_readers List of role ARNs that will have permissions to read elastic superuser secret. list(string) null no
smtp_credentials_secret AWS secret name with SMTP credentials. The secret must contain a JSON with user and password keys. string null no
snapshot_bucket_prefix A string prefix to a bucket name for snapshots. Random by default. string null no
subnet_ids List of subnet ids where the elasticsearch instances will be created list(string) n/a yes
ubuntu_codename Ubuntu version to use for the elasticsearch node string "jammy" no
zone_id Domain name zone ID where the website will be available string n/a yes

Outputs

Name Description
cluster_data_url HTTPS endpoint to access the cluster data nodes
cluster_master_url HTTPS endpoint to access the cluster masters
cluster_url HTTPS endpoint to access the cluster
data_instance_role_arn EC2 instance profile will have a role with this ARN
elastic_password Password for Elasticsearch superuser elastic.
elastic_secret_id AWS secret that stores password for user elastic.
kibana_system_password Password for the kibana_system user
kibana_system_secret_id AWS secret that stores password for user kibana_system
master_instance_role_arn EC2 instance profile will have a role with this ARN
snapshots_bucket AWS S3 Bucket where Elasticsearch snapshots will be stored.