Error in Cloudwatch Logs for Lambda
Opened this issue · 5 comments
Pete, it looks like I am all set, but two resources appear to be trying to use the same CloudWatch log group.
I tore down everything and reran, and got this error from a clean run.

But previously I got this, indicating that two objects want to use the same CloudWatch log group.

Not sure what those are doing yet, but I figured I would point it out. I assume it does not hurt anything; it just causes an error in `terraform apply`.
This is an AWS synchronisation issue that occurs if you delete a named resource and immediately try to recreate it. AWS has some internal consistency to reach, so any attempt made too soon after deletion is likely to hit this. The advice is to wait a few minutes and try again.
I'm getting this exact same error. Not sure how waiting a few minutes will fix this; Terraform is trying to create two resources with the exact same name.
In order to be able to re-apply, I renamed the log group in resource "aws_cloudwatch_log_group" "object_redirect_ue1_local"
to "/aws/lambda/us-east-1.${var.site_name}_local_redirect_index_html".
@alex036 I think I see the cause now. Is the region you're deploying into also us-east-1?
There is a particular behaviour of CloudFront Lambda@Edge functions: they create a log group in the local (execution) region, but additionally one in us-east-1, regardless of your deployment region.
I think the actual issue here is that I'm not accommodating the case where us-east-1 is also the deployment region for this set-up, which causes the collision in this scenario.
The actual fix here is to make the definition at #15 conditional on the deployment region not being us-east-1. I'll add this as an action.
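A conditional along these lines could work. This is only a sketch: it assumes a `var.region` variable (or equivalent) carries the deployment region, and reuses the resource and log-group names mentioned earlier in this thread.

```hcl
# Sketch, not the repo's exact code: skip creating the extra us-east-1
# log group when the deployment region already is us-east-1, where the
# "local" log group definition would otherwise collide with it.
# `var.region` is an assumed variable name.
resource "aws_cloudwatch_log_group" "object_redirect_ue1_local" {
  count = var.region == "us-east-1" ? 0 : 1

  name = "/aws/lambda/us-east-1.${var.site_name}_local_redirect_index_html"
}
```

Note that once `count` is set, any other references to this resource in the configuration would need an index, e.g. `aws_cloudwatch_log_group.object_redirect_ue1_local[0]`.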
Yes, also deploying this to us-east-1. Thanks for taking a look at this.