aws: Allow rolling updates for ASGs
radeksimko opened this issue · 80 comments
Once #1109 is fixed, I'd like to be able to use Terraform to actually roll out the updated launch configuration and do it carefully.
Anyone who decides not to roll out the update and only change the LC association should still be allowed to do so.
Here's an example from CloudFormation:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html
How about something like this?
resource "aws_autoscaling_group" "test" {
rolling_update_policy {
max_batch_size = 1
min_instances_in_service = 2
pause_time = "PT0S"
suspend_processes = ["launch", "terminate"]
wait_on_resource_signals = true
}
}
Then, if such a policy is defined, TF can use the autoscaling API to shut down each EC2 instance separately and let the ASG spin up a new one with an updated LC.
👍
Our use case pretty much depends on this working cleanly. Otherwise, rolling updates require some minimal external intervention, which I am working on making obsolete.
Huge +1 on this
👍
👍 Likewise. We're using a Python script to handle this, which makes it a bit clunky to keep things simple with Terraform.
👍
@radeksimko totally agreed that this is desirable behavior.
It's worth noting, though, that this is CloudFormation-specific behavior that's not exposed up at the AutoScaling API. I think the way we'd be able to achieve something similar would be to build some resources based on CodeDeploy:
https://docs.aws.amazon.com/codedeploy/latest/APIReference/Welcome.html
My experience with CodeDeploy is that it's limited to installing software on running instances, so it can roll out updates to ASG instances, but it doesn't know how to do a roll-out by terminating instances like CF does.
This would be an awesome feature, but I don't really think it's something that fits into Terraform's current model very well. Perhaps if there was a separate lifecycle type hook that actions could be plugged into.
So something like:
resource "aws_launch_configuration" "main" {
}

resource "aws_autoscaling_group" "main" {
  lifecycle {
    on_update {
      properties = ["launch_configuration"]

      actions {
        action "aws_autoscaling_group_terminate_instances" {
          batch_size = 1
        }
      }
    }
  }
}
These actions then could be a whole separate concept in Terraform.
Ah, just realized this. I did not realize that this was implemented by AWS as a CloudFormation primitive rather than an ASG primitive we could hook into.
Is anyone experimenting with using other tools to hack around this terraform limitation? Even if we were to combine terraform with external tooling like http://docs.ansible.com/ec2_asg_module.html I'm not sure where we'd hook in. A local-exec provisioner seems like the right thing -- but those are only triggered on creation, not modification. Maybe as a placeholder before implementing a native rolling update solution terraform could offer some sort of hook for launch configuration changes that we could use to trigger an external process?
Otherwise, I think we'll need to manage AMI version updates externally via ansible or some homegrown tool and then use terraform refresh to pull them in before doing plan or apply runs. It's all starting to drift away from the single-command infrastructure creation and mutation dream we had while starting to use terraform.
As part of our migration to terraform/packer away from a set of legacy tools we had been planning a deployment model based on updating pre-baked AMIs created with packer on a rolling basis. Any other ideas for workarounds that we could use until terraform is able to do this sort of thing out of the box?
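For example, maybe something like a null_resource keyed off the launch configuration name, so that a local-exec runs on modification rather than only on creation. A rough sketch only (the resource names are illustrative, and roll-asg.sh is a hypothetical external script that drains and replaces instances):

resource "null_resource" "asg_roller" {
  # re-created (and therefore re-provisioned) whenever the LC name changes
  triggers {
    lc_name = "${aws_launch_configuration.main.name}"
  }

  # roll-asg.sh is a hypothetical external script (bash, Ansible, etc.)
  provisioner "local-exec" {
    command = "./roll-asg.sh ${aws_autoscaling_group.main.name}"
  }
}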
I've been using bash around the AWS cli. It would be awesome to implement tasks in Go against the awslabs library and then just call them from terraform though.
Over the weekend I wrote some code to handle blue/green deploys for our particular use case in ruby, adding to our ever-growing wrapper of custom code needed to actually make terraform useful.
Rather than hooking into terraform events it's ended up as a huge hack: it dynamically modifies the terraform plan to do the create of the new ASG with new AMI alongside the existing ASG. Then it uses ruby to make sure the new ASG is up and working correctly before removing the old ASG, regenerates the terraform plan file to match the new state, and finally calls terraform refresh so that the tfstate matches the new reality we've created.
Would be great if this sort of workflow for mutating a running app when we've updated an AMI were built in or if there were at least easy ways to hook into terraform to add custom behavior like this beyond learning Go and Terraform internals. In our case, even if terraform could handle the ASG operations for us, we'd still like to be able to run our quick, custom sanity check script to make sure everything is working properly on the new instances before removing the old ASG from the pool.
This feature might be the most straightforward (although awkwardly round-about) way to get rolling deploys working inside terraform: #1083
+1.. this would make a huge difference in some of my current workflows
+1
+1
+1
@woodhull that wrapper wouldn't be around somewhere we could take a looksee, would it? 😄
@nathanielks I copy/pasted some fragments of the code here: https://gist.github.com/woodhull/c56cbd0a68cb9b3fd1f4
It wasn't designed for sharing or reusability, sorry about that! I hope it's enough of a code fragment to help.
@woodhull you're a gentleman, thanks for sharing!
+1
+100
If anyone is trying to achieve Blue-Green deployments with AWS Auto Scaling and Terraform, I've made a note about how I did it here. Not sure if it's an ideal solution, but it's working for me and proving to be very reliable at the moment :)
Thanks @ryandjurovich!
👍
+1, just about to look at how to do this. Would love to avoid writing a script that cycles instances down and up to pick up the new Launch Configuration. Will take a look at @ryandjurovich's script also :)
👍
+1
I'm sure there is a genuine need for rolling updates to an ASG, so I won't detract from this too much. But for anyone reading this issue who is OK with blue/green deployment as opposed to rolling updates: I've done so successfully and cleanly using Terraform's built-in functionality, as suggested in this really useful post. Apparently it's how HashiCorp do their production rollouts :)
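For anyone who wants to see it spelled out, here's a minimal sketch of that pattern (resource names, the ELB, and var.ami are illustrative, not from the post):

resource "aws_launch_configuration" "web" {
  # no "name": Terraform generates a unique one, so a changed AMI
  # yields a brand-new LC alongside the old one
  image_id      = "${var.ami}"
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "web" {
  # interpolating the LC name forces a replacement ASG whenever the LC changes
  name                 = "web-${aws_launch_configuration.web.name}"
  launch_configuration = "${aws_launch_configuration.web.name}"
  availability_zones   = ["us-east-1a", "us-east-1b"]
  min_size             = 2
  max_size             = 4

  # wait until the new instances pass ELB health checks
  # before the old ASG is destroyed
  load_balancers   = ["${aws_elb.web.name}"]
  min_elb_capacity = 2

  lifecycle {
    create_before_destroy = true
  }
}

On the next apply after an AMI change, Terraform stands up the new ASG, waits for min_elb_capacity healthy instances behind the ELB, and only then destroys the old one.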
@ryan0x44 I'm afraid your gist link is dead
Hi @farridav,
Blue/green deployments are a better solution for stateless services (such as web servers and web services), but for stateful services (such as databases, search indexes, caches, and queues) it is often not possible or not advisable to do so.
Imagine a 10-node DB cluster with 10TB of data: spinning up a completely new cluster will cause a full resync of the 10TB of data from the old cluster to the new cluster all at once. This might saturate the network link and cause a denial of service, when maybe all you wanted was to increase the number of connections.
And the size of the data is not the only problem. Clusters which support ring or grid topologies have fewer problems growing/shrinking elastically, but master/slave topologies are more complicated, as you can only grow slaves and you have to have a procedure to step down the master and elect a slave.
The possibility of doing a controlled rolling update in these types of situations is far simpler and has less impact (e.g. rolling only one data node at a time, waiting for the data to resync, then moving to the next).
@farridav as per the post https://groups.google.com/forum/#!msg/terraform-tool/7Gdhv1OAc80/iNQ93riiLwAJ
Everything seems to be working as expected:
1 - Updating the AMI creates a new launch config
2 - Updates the autoscaling group to the new launch config name
However, new machines are not launching automatically using the new LC/ASG. What could be the issue here?
@titotp check the activity history tab on the ASG, it should tell you why it's unable to launch the instances.
I am not seeing any activity on the autoscaling group
This looks like a wonderful feature! 👍
+1
+1
I'd enjoy the simplicity of immutable infra. Moving away from deploying new artifacts to existing instances, and instead just phasing in new instances/containers. Without a terraform construct to do so it is a hard sell.
Additionally, having to hack together a solution that requires external scripting and/or more than one apply limits the ability to make use of Atlas. I would enjoy throwing out ansible deploy playbooks, aws code-deploy, fab, and all the complexity they introduce.
+1
+1
+1
👍
👍
+1
I wrote a blog post based on Paul Hinze's example: http://robmorgan.id.au/post/139602239473/rolling-deploys-on-aws-using-terraform
+1
+1
+1
I tried out the Hashicorp rolling deployment approach and while it works well with a fixed-size ASG, it doesn't work with one that is dynamically sized (e.g. in response to increased load). I didn't want to write my own hacky rolling deployment script and decided to see if I could somehow leverage the rolling deployments provided by the CloudFormation UpdatePolicy.
To make it work, I defined everything as usual in my Terraform templates, except for the Auto Scaling Group, which I defined using CloudFormation in a Terraform aws_cloudformation_stack resource:
resource "aws_cloudformation_stack" "autoscaling_group" {
name = "my-asg"
template_body = <<EOF
{
"Resources": {
"MyAsg": {
"Type": "AWS::AutoScaling::AutoScalingGroup",
"Properties": {
"AvailabilityZones": ["us-east-1a", "us-east-1b", "us-east-1d"],
"LaunchConfigurationName": "${aws_launch_configuration.launch_configuration.name}",
"MaxSize": "4",
"MinSize": "2",
"LoadBalancerNames": ["${aws_elb.elb.name}"],
"TerminationPolicies": ["OldestLaunchConfiguration", "OldestInstance"],
"HealthCheckType": "ELB"
},
"UpdatePolicy": {
"AutoScalingRollingUpdate": {
"MinInstancesInService": "2",
"MaxBatchSize": "2",
"PauseTime": "PT0S"
}
}
}
}
}
EOF
}
CloudFormation is, of course, more verbose than Terraform, and the plan command is not as helpful (the diff between the two CloudFormation templates is a bit hard to read). However, you can still reference Terraform resources in the CloudFormation template (e.g., note the reference to my launch configuration, aws_launch_configuration.launch_configuration.name), and now, every time I update my launch configuration, I get:
- An automatic, rolling deployment.
- Automatic rollback if the deployment fails.
- A log of the deployment in the CloudFormation console.
Of course, it would be even better if this were built into Terraform (perhaps even by wrapping my solution above behind a Terraform resource, given that rolling deployments aren't exposed in the APIs), but this seems to be a passable workaround for now.
@brikis98 how do you associate an aws_autoscaling_policy to an ASG generated by CloudFormation? The autoscaling_group_name is dynamically generated by CloudFormation itself and not exported by aws_cloudformation_stack; is that something that is done manually?
@maruina: You can add an output to the CloudFormation template and use it without any manual steps:
resource "aws_cloudformation_stack" "autoscaling_group" {
name = "my-asg"
template_body = <<EOF
{
"Resources": {
"MyAsg": {
"Type": "AWS::AutoScaling::AutoScalingGroup",
"Properties": {
"AvailabilityZones": ["us-east-1a", "us-east-1b", "us-east-1d"],
"LaunchConfigurationName": "${aws_launch_configuration.launch_configuration.name}",
"MaxSize": "4",
"MinSize": "2",
"LoadBalancerNames": ["${aws_elb.elb.name}"],
"TerminationPolicies": ["OldestLaunchConfiguration", "OldestInstance"],
"HealthCheckType": "ELB"
},
"UpdatePolicy": {
"AutoScalingRollingUpdate": {
"MinInstancesInService": "2",
"MaxBatchSize": "2",
"PauseTime": "PT0S"
}
}
}
},
"Outputs": {
"AsgName": {
"Description": "The name of the auto scaling group",
"Value": {"Ref": "MyAsg"}
}
}
}
EOF
}
resource "aws_autoscaling_policy" "my_policy" {
name = "my-policy"
scaling_adjustment = 4
adjustment_type = "ChangeInCapacity"
cooldown = 300
autoscaling_group_name = "${aws_cloudformation_stack.autoscaling_group.outputs.AsgName}"
}
Note the use of aws_cloudformation_stack.autoscaling_group.outputs.AsgName in the aws_autoscaling_policy.
Sweet, thank you! 😄
@rvangundy Do you have create_before_destroy = true on your LC?
@brikis98 Yeah I sure do. What's weird is I'm seeing that as the CloudFormation stack is being modified, it's actively trying to destroy the previous LC as well:
module.my_service.aws_launch_configuration.lc: Creation complete
module.my_service.aws_launch_configuration.lc: Destroying...
module.my_service.aws_launch_configuration.lc: Destroying...
module.my_service.aws_launch_configuration.lc: Still destroying... (10s elapsed)
module.my_service.aws_launch_configuration.lc: Still destroying... (10s elapsed)
module.my_service.rolling_autoscaling_group.aws_cloudformation_stack.autoscaling_group: Still modifying... (10s elapsed)
module.my_service.aws_launch_configuration.lc: Still destroying... (18s elapsed)
module.my_service.aws_launch_configuration.lc: Still destroying... (20s elapsed)
module.my_service.aws_launch_configuration.lc: Still destroying... (20s elapsed)
module.my_service.rolling_autoscaling_group.aws_cloudformation_stack.autoscaling_group: Still modifying... (20s elapsed)
module.my_service.aws_launch_configuration.lc: Still destroying... (28s elapsed)
module.my_service.aws_launch_configuration.lc: Still destroying... (30s elapsed)
Note also that I've encapsulated my CloudFormation+ASG configuration in a module, so I may try to pull that out next...
@rvangundy Hmm, not sure. If you post your code, I can see if there are any obvious differences, but for the most part, the ASG setup with CloudFormation has worked reasonably well for me.
@brikis98 Fixed it! It was because I had externalized my CloudFormation template into a template_file resource, which decoupled the dependency graph between the LC and the CloudFormation stack. I inlined it using <<EOF...EOF and that did the trick.
@rvangundy Ah, I have mine inlined with a heredoc, which is probably why I didn't hit that issue. Good to know, thanks!
Regarding "Immagine a 10 nodes db cluster with 10TB of data, spinning up a complete new cluster will cause the full resync of the 10TB of data from the old cluster to the new cluster all at once, this might saturate the network link and cause denial of service, where maybe all you wanted was to increase the number of connections."
@BrunoBonacci, we are facing the same situation with rolling updates. Imagine we want to bump the version of the software running on a data node: we need that kind of in-place rolling update. It looks like a rolling update with TF is not going to get you there. Maybe we should consider something like Ansible to deal with that?
@shuoy Certainly Ansible is a way to "script" this behaviour, but the idea of using terraform (at least for me) is to replace previous scripting tools such as Chef, Puppet, Ansible and so on.
I've seen different approaches to rolling updates around. Kubernetes allows you to set a grace time to wait between the update of one machine and the next. This certainly could solve some of the issues; however, it would only work for quite short grace times.
If you have 10TB to replicate in a cloud environment, it could take a while, especially if you throttle the speed to avoid network saturation. Having a grace time of 10-20 hours wouldn't be reasonable.
I think the right approach would be to provide an attribute which can be set to choose whether the rolling update should be performed automatically (based on a grace period) or in stages.
something like:
## just a suggestion
rolling_update = "auto"
grace_period = "3m"
This would wait 3 minutes between one successful update and the next.
The staged rolling update could work as follows:
## just a suggestion
rolling_update = "staged"
instances_updated = 1
In this case terraform would look at the ASG of, say, 10 machines, and just update one. Then the operator would wait for the new instance to be fully in service before bumping instances_updated = 2 and repeating terraform apply.
In this way the rolling update could be performed in stages, giving the app enough time to replicate the necessary data and become fully operational. Once all 10 instances have been updated, the update can be considered successful.
Again, this is just a suggestion to allow a declarative (non-scripted) approach to rolling updates, which would work for stateful clusters (such as databases).
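Spelled out on the resource itself, the suggestion might look like this (purely hypothetical attributes; nothing here exists in Terraform today):

resource "aws_autoscaling_group" "data_cluster" {
  # ...usual ASG arguments...

  ## hypothetical "auto" mode: roll instances automatically, 3 minutes apart
  # rolling_update = "auto"
  # grace_period   = "3m"

  ## hypothetical "staged" mode: update exactly this many instances;
  ## the operator bumps the number between applies
  rolling_update    = "staged"
  instances_updated = 1
}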
@BrunoBonacci "If you have 10TB to replicate in a cloud environment it could take a while, specially if you throttle the speed to avoid network saturation. Having a grace time of 10-20 hours wouldn't be reasonable."
-- My point is this: for a node falling into this category, Chef/Ansible is exactly the tool you would use for the in-place update; destroying the old node and bringing up a new one (the immutable way) is exactly what you want to avoid.
-- In the in-place-update way (Chef/Ansible), you basically apt-get update your mysqld (which takes 1-2 minutes); the "destroy the old and bring up a new one" (immutable) way takes hours, unless you are using an EBS volume. But I think you meant to say the non-EBS way, or maybe I missed your point above.
Do we have scenarios where large volumes of data reside on ephemeral disk? Yes: e.g., for people who use EC2 as their Hadoop cluster capacity, data is saved on ephemeral disk from a cost-saving perspective (EBS adds extra cost).
So, in short, I think Terraform is great in certain scenarios, but it's not accurate that Chef/Ansible can be totally replaced, particularly in the update use case for stateful nodes.
@brikis98 and others that are using the output example posted here: it seems not to work if you upgrade to 0.7.0-rc2. Here is the error that it kicks out:
* Resource 'aws_cloudformation_stack.web_autoscaling_group' does not have attribute 'outputs.AsgName' for variable 'aws_cloudformation_stack.web_autoscaling_group.outputs.AsgName'
I am still trying to get outputs working, but if anyone has any advice on how to get this working again with 0.7.x, that would be awesome.
@jdoss on first glance, it looks like that reference is using the pre-0.7 map dot-index notation. in 0.7, maps are indexed like aws_instance.tag["sometag"]
So try changing this line to use square bracket indexing and see if that fixes it for you!
autoscaling_group_name = "${aws_cloudformation_stack.autoscaling_group.outputs["AsgName"]}"
If that doesn't work I'd welcome a fresh bug report and we can dig in. 😄
@rvangundy -- did you keep the lifecycle hook create_before_destroy=true or remove it? I think I am thrashing between this error and #3916
@moofish32 I have create_before_destroy=true on my aws_launch_configuration
@brikis98 - Thank you for the piece of CloudFormation code which does rolling deployments. It does not work for me, because I use spot instances in my launch configuration, and when I specify MinInstancesInService greater than zero (because I really do not want all instances terminated before rolling) I get this error from AWS:
Autoscaling rolling updates cannot be performed because the current launch configuration is using spot instances and MinInstancesInService is greater than zero.
I wonder if anyone has ideas on how to implement an ASG rolling update for spot instances with fewer moving parts?
My initial idea was to make a shell script which would work similarly to aws-ha-release.
I prefer to use just Terraform and CloudFormation and avoid implementing orchestration magic.
UPD: Having MinInstancesInService=0 and MaxBatchSize=1 (which is smaller than the number of instances in the ASG) helped. I am happy.
+1
Is there any plan to implement this in the near/far future?
+1
+1
+1
@phinze How about implementing this as a data source? It doesn't seem like the worst way of implementing it given the current code structure.
I immediately regret this suggestion.
I don't mind implementing this feature, but I'm kind of at a loss as to how to implement it in HCL. As the comment above would suggest, I thought about using the data source format, but linking it back to the ASG resource (something like UpdatePolicy = "${data.aws_asg_policy.foo}"); this concept doesn't seem horrid, but leaves room for interpretation.
Please advise on the most idiomatic way this could be represented in HCL.
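To make the question concrete, here is one hypothetical shape (nothing below exists in Terraform; the data source and attribute names are illustrative only):

# Hypothetical only: neither this data source nor the update_policy
# argument exists; this just sketches the wiring in question.
data "aws_asg_policy" "foo" {
  max_batch_size           = 1
  min_instances_in_service = 2
}

resource "aws_autoscaling_group" "main" {
  # ...usual ASG arguments...
  update_policy = "${data.aws_asg_policy.foo.id}"
}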
+1
+1
+1
+1
Hi folks,
we do appreciate the +1's, but these generate notifications for everyone subscribed to the issue. 😄
It's therefore more helpful for everyone to use reactions, as we can then sort issues by the number of 👍:
https://github.com/hashicorp/terraform/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc+label%3Aprovider%2Faws
The 👍 reactions do count, and we're more than happy for people to use those; we prefer them over "+1" comments for the reasons mentioned.
Thanks.
👍
Hi everyone,
While this sort of sophisticated behavior would definitely be useful, we (the Terraform team) don't have any short-term plans to implement this ourselves since we're generally recommending that people with these needs consider other solutions such as EC2 Container Service and Nomad, which either have or are more likely to grow sophisticated mechanisms for safe rollout and are in a better position to do so due to the ability to manage such multi-step state transitions.
We're trying to prune stale feature requests (that aren't likely to be addressed soon) by closing them. In this case we're currently leaning towards not implementing significant additional behaviors on top of what the EC2 API natively supports, so I'm going to close this.
After chatting with @apparentlymart privately I just want to add a few more things.
We do not suggest that everyone should use containers (nor that containers solve the problem entirely), and for those who prefer not to, there's a workaround: you can use aws_cloudformation_stack in a similar way to how the folks at Travis did.
Also, I'm tracking this issue in my own TODO list so I don't forget about it. I personally want to get this done, but it's currently pretty low on my list of priorities; PRs are certainly welcomed.