terraform-aws-modules/terraform-aws-opensearch

master_user_password and access_policy_statements attributes are forcing constant updates

Closed this issue · 4 comments

Please provide a clear and concise description of the issue you are encountering, and a reproduction of your configuration (see the examples/* directory for references that you can copy+paste and tailor to match your configs if you are unable to copy your exact configuration). The reproduction MUST be executable by running terraform init && terraform apply without any further changes.

If your request is for a new feature, please use the Feature request template.

  • ✋ I have searched the open/closed issues and my issue is not listed.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (ONLY if your state is stored remotely, which is hopefully the best practice you are following): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

  • Module version [Required]: 1.4.0

  • Terraform version: 1.6.0

  • Provider version(s): 5.0

Reproduction Code [Required]

module "opensearch" {
  source  = "terraform-aws-modules/opensearch/aws"
  version = "1.4.0"

  # Cluster configuration
  domain_name    = "opensearch"
  engine_version = var.opensearch.engine_version


  software_update_options = {
    auto_software_update_enabled = var.opensearch.auto_software_update_enabled
  }

  # Security Group
  security_group_name = "sg-opensearch"
  security_group_rules = {
    ingress_443 = {
      type        = "ingress"
      description = "HTTPS access from VPC"
      from_port   = 443
      to_port     = 443
      ip_protocol = "tcp"
      cidr_ipv4   = module.vpc.vpc_cidr_block
    }
    egress_all = {
      type        = "egress"
      description = "Allow all traffic out"
      from_port   = -1
      to_port     = -1
      ip_protocol = "-1"
      cidr_ipv4   = module.vpc.vpc_cidr_block
    }
  }

  # Networking configuration
  vpc_options = {
    subnet_ids = slice(module.vpc.private_subnets, 0, 1)
  }

  # Advanced options configuration
  advanced_options = {
    "rest.action.multi.allow_explicit_index" = "true"
  }

  advanced_security_options = {
    enabled                        = false
    anonymous_auth_enabled         = true
    internal_user_database_enabled = true

    master_user_options = {
      master_user_name     = var.opensearch.master_username
      master_user_password = data.aws_secretsmanager_random_password.main["opensearch"].random_password
    }
  }

  auto_tune_options = {
    desired_state       = var.opensearch.auto_tune
    rollback_on_disable = "NO_ROLLBACK"
  }

  cluster_config = {
    instance_count           = var.opensearch.instances_count
    dedicated_master_enabled = false
    dedicated_master_type    = var.opensearch.dedicated_master_type
    instance_type            = var.opensearch.instance_type

    zone_awareness_enabled = false
  }

  domain_endpoint_options = {
    enforce_https       = true
    tls_security_policy = var.opensearch.tls_security_policy
  }

  # Storage configuration
  ebs_options = {
    ebs_enabled = var.opensearch.ebs.ebs_enabled
    iops        = var.opensearch.ebs.iops
    throughput  = var.opensearch.ebs.throughput
    volume_type = var.opensearch.ebs.volume_type
    volume_size = var.opensearch.ebs.volume_size
  }

  # Encryption configuration
  encrypt_at_rest = {
    enabled = true
  }

  node_to_node_encryption = {
    enabled = var.opensearch.node_to_node_encryption
  }

  # Access Policy
  access_policy_statements = [
    {
      effect = "Allow"

      principals = [{
        type        = "*"
        identifiers = ["*"]
      }]

      actions = ["es:*"]
    }
  ]
}

variable "opensearch" {
  type        = any
}
### These are  the TF Vars values 

opensearch = {
  instances_count              = 1
  dedicated_master_type        = "t3.small.search"
  instance_type                = "t3.small.search"
  engine_version               = "OpenSearch_2.7"
  node_to_node_encryption      = false
  auto_software_update_enabled = false
  master_username              = "admin"
  tls_security_policy          = "Policy-Min-TLS-1-2-2019-07"
  auto_tune                    = "DISABLED"

  ebs = {
    ebs_enabled = true
    iops        = 3000
    throughput  = 125
    volume_type = "gp3"
    volume_size = 30
  }
}

Steps to reproduce the behavior:

terraform init
terraform apply
terraform plan

Expected behavior

Resources get applied without drift

Actual behavior

When running terraform plan after applying the changes, there is constant drift in the following:

  # module.opensearch.data.aws_iam_policy_document.this[0] will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "aws_iam_policy_document" "this" {
      + id                        = (known after apply)
      + json                      = (known after apply)
      + minified_json             = (known after apply)
      + override_policy_documents = []
      + source_policy_documents   = []

      + statement {
          + actions   = [
              + "es:*",
            ]
          + effect    = "Allow"
          + resources = [
              + "arn:aws:es:us-west-2:<REDACTED>:domain/<REDACTED>-opensearch/*",
            ]

          + principals {
              + identifiers = [
                  + "*",
                ]
              + type        = "*"
            }
        }
    }

  # module.opensearch.aws_opensearch_domain.this[0] will be updated in-place
  ~ resource "aws_opensearch_domain" "this" {
        id                 = "arn:aws:es:us-west-2:<REDACTED>:domain/<REDACTED>-opensearch"
        tags               = {
            "terraform-aws-modules" = "opensearch"
        }
        # (11 unchanged attributes hidden)

      ~ advanced_security_options {
            # (3 unchanged attributes hidden)

          ~ master_user_options {
              ~ master_user_password = (sensitive value)
                # (1 unchanged attribute hidden)
            }
        }

        # (14 unchanged blocks hidden)
    }

  # module.opensearch.aws_opensearch_domain_policy.this[0] will be updated in-place
  ~ resource "aws_opensearch_domain_policy" "this" {
      ~ access_policies = jsonencode(
            {
              - Statement = [
                  - {
                      - Action    = "es:*"
                      - Effect    = "Allow"
                      - Principal = "*"
                      - Resource  = "arn:aws:es:us-west-2:<REDACTED>:domain/<REDACTED>opensearch/*"
                    },
                ]
              - Version   = "2012-10-17"
            }
        ) -> (known after apply)
        id              = "esd-policy-<REDACTED>-opensearch"
        # (1 unchanged attribute hidden)
    }

Additional context

The "master_user_password" and "access_policy_statements" are forcing the update of the resource everytime that we run a plan/apply

That's because you are generating a random password on every apply, no?

master_user_password = data.aws_secretsmanager_random_password.main["opensearch"].random_password

That seems to be the issue. Based on this open issue, this is the expected behavior.

As a reference, this was the workaround we implemented.

resource "aws_secretsmanager_secret" "main" {
  name        = "opensearch-secret"
  description = "Master user password for Amazon OpenSearch"
}

resource "aws_secretsmanager_secret_version" "main" {
  secret_id     = aws_secretsmanager_secret.main.id
  secret_string = data.aws_secretsmanager_random_password.main["opensearch"].random_password

  lifecycle {
    ignore_changes = [
      secret_string
    ]
  }
}


module "opensearch" {
  source  = "terraform-aws-modules/opensearch/aws"
  version = "1.4.0"

  advanced_security_options = {
     ...
    master_user_options = {
      master_user_name     = var.opensearch.master_username
      master_user_password = aws_secretsmanager_secret_version.main.secret_string
    }
  }
}
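
For context on why this stops the drift: with secret_string in ignore_changes, the random value from the data source is only written on the first apply, and the module then receives the stable value held in state rather than a freshly generated one. An equivalent approach (not shown in this thread; the resource name below is illustrative) is to generate the password once with the hashicorp/random provider:

resource "random_password" "opensearch_master" {
  # Created once and kept in state, so later plans see no change.
  length  = 32
  special = true
}

# ...and reference it directly in the module:
#   master_user_password = random_password.opensearch_master.result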

I think we're good to close this issue, thanks!

Sounds good. Also, you are using a secret from Secrets Manager there, but you aren't actually using any of the benefits of Secrets Manager; you could achieve the same thing with an SSM parameter, and it's cheaper.
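
A minimal sketch of that SSM-based variant (the parameter name is illustrative; it relies on the same ignore_changes idea as the workaround above):

resource "aws_ssm_parameter" "opensearch_master_password" {
  name  = "/opensearch/master-password" # illustrative name
  type  = "SecureString"
  value = data.aws_secretsmanager_random_password.main["opensearch"].random_password

  lifecycle {
    # Keep the first generated value; ignore later regenerations of the data source.
    ignore_changes = [value]
  }
}

# Referenced in the module as:
#   master_user_password = aws_ssm_parameter.opensearch_master_password.value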

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.