terraform-aws-modules/terraform-aws-lambda

Deployment package rebuild triggered by timestamp

bpgould opened this issue · 5 comments

Description

The docs for the module state "Hash of zip-archive created with the same content of the files is always identical which prevents unnecessary force-updates of the Lambda resources unless content modifies. If you need to have different filenames for the same content you can specify extra string argument hash_extra."

However, I am getting resource replacement based on the timestamp.

Checking the source code, I see exactly what I suspected: https://github.com/terraform-aws-modules/terraform-aws-lambda/blob/master/package.tf

On line 64:

triggers = {
  filename  = data.external.archive_prepare[0].result.filename
  timestamp = data.external.archive_prepare[0].result.timestamp
}

This does not align with the documented behavior of "prevents unnecessary force-updates of the Lambda resources unless content modifies". I would like to be able to disable timestamp-based replacement without having to fork the module and delete the trigger.
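For illustration only, here is a minimal sketch of the kind of escape hatch I mean, assuming a hypothetical boolean variable (called trigger_on_package_timestamp here; it is not part of the module's current interface) that turns the timestamp trigger into a constant so that only a content change forces replacement:

# Hypothetical variable; not part of the module's 4.13.0 interface.
variable "trigger_on_package_timestamp" {
  description = "Whether a new package timestamp should force replacement of the archive resource"
  type        = bool
  default     = true
}

# Sketch of how the trigger in package.tf could be made conditional.
resource "null_resource" "archive" {
  triggers = {
    filename  = data.external.archive_prepare[0].result.filename
    # With the flag disabled, the timestamp collapses to a constant, so only a
    # changed filename (which is derived from the content hash) forces replacement.
    timestamp = var.trigger_on_package_timestamp ? data.external.archive_prepare[0].result.timestamp : ""
  }
}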



  • ✋ I have searched the open/closed issues and my issue is not listed.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (ONLY if state is stored remotely, which is hopefully the best practice you are already following): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

  • Module version [Required]: 4.13.0 [latest]

  • Terraform version: 1.4.2

  • Provider version(s):

Reproduction Code [Required]

module "lambda_function" {
  source  = "terraform-aws-modules/lambda/aws"
  version = "4.13.0"

  depends_on = [
    aws_cloudwatch_log_group.lambda-log-group
  ]

  function_name                     = var.function_name
  description                       = var.description
  handler                           = var.handler
  runtime                           = var.runtime
  timeout                           = var.timeout
  publish                           = true
  use_existing_cloudwatch_log_group = true
  attach_cloudwatch_logs_policy     = false
  create_role                       = false

  lambda_role = aws_iam_role.lambda-role.arn

  source_path = "../lambdas/${var.function_name}/"

  store_on_s3   = true
  s3_bucket     = var.s3_bucket_id
  artifacts_dir = "builds/${var.function_name}/"

  tags = var.tags
}

Steps to reproduce the behavior:

no

yes

Using the module inside my own module and getting unexpected behavior.

Expected behavior

The deployment package should only be updated when the code changes and the hash of the zip archive changes.
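For comparison, a minimal sketch of the content-hash-only behavior I expect, using the plain archive_file data source and aws_lambda_function's source_code_hash, and reusing the variables and IAM role from the reproduction config above; the function is updated only when the archive's hash changes, never because of a timestamp:

# Sketch only: plain-Terraform equivalent of content-driven updates,
# reusing var.function_name, var.handler, var.runtime and the IAM role
# from the reproduction config above.
data "archive_file" "lambda" {
  type        = "zip"
  source_dir  = "../lambdas/${var.function_name}/"
  output_path = "${path.module}/builds/${var.function_name}.zip"
}

resource "aws_lambda_function" "this" {
  function_name = var.function_name
  role          = aws_iam_role.lambda-role.arn
  handler       = var.handler
  runtime       = var.runtime
  filename      = data.archive_file.lambda.output_path

  # Updates the function code only when the zip content (and therefore its
  # hash) changes; no timestamp is involved.
  source_code_hash = data.archive_file.lambda.output_base64sha256
}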

Actual behavior

Every build in CI/CD causes all deployment packages to be rebuilt because of the timestamp trigger.


Additional context

All of my Lambdas in CI/CD get this:

# module.update_snow_success.module.lambda_function.null_resource.archive[0] must be replaced
-/+ resource "null_resource" "archive" {
      ~ id       = "3888062224359755732" -> (known after apply)
      ~ triggers = { # forces replacement
          ~ "timestamp" = "1685457166975563000" -> "1685561913544040000"
            # (1 unchanged element hidden)
        }
    }

Appears to be a duplicate of #396.

This issue has been automatically marked as stale because it has been open for 30 days
with no activity. Remove the stale label or comment, or this issue will be closed in 10 days.

This issue was automatically closed because it remained stale for 10 days.

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.