Suspend/Enable Autoscaling Groups
037g opened this issue · 6 comments
Auto Scaling groups kick in and relaunch instances once you shut down the EC2 instances. To prevent this, you must suspend their scaling processes.
It would be great if you could include them in your process.
Here are the scripts I use before conserve and after restore:
Suspend Auto-scaling Groups: This will use the region from your `~/.aws/config` profile.

```sh
aws autoscaling describe-auto-scaling-groups --output=yaml | grep AutoScalingGroupName | awk '{print $2}' | xargs -n1 aws autoscaling suspend-processes --auto-scaling-group-name
```
Resume Auto-scaling Groups: This will use the region from your `~/.aws/config` profile.

```sh
aws autoscaling describe-auto-scaling-groups --output=yaml | grep AutoScalingGroupName | awk '{print $2}' | xargs -n1 aws autoscaling resume-processes --auto-scaling-group-name
```
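A variant of the same idea, in case parsing the YAML with grep/awk ever misbehaves, is to let the CLI do the filtering with `--query`; you can also append `--region eu-west-1 --profile default` (or whatever you use) to each `aws` command if you don't want to rely on `~/.aws/config`. This is just a sketch of the same approach:

```sh
# Suspend all scaling processes for every Auto Scaling group in the account/region.
aws autoscaling describe-auto-scaling-groups \
  --query 'AutoScalingGroups[].AutoScalingGroupName' --output text \
| xargs -n1 aws autoscaling suspend-processes --auto-scaling-group-name

# Resume them again once the instances have been restored.
aws autoscaling describe-auto-scaling-groups \
  --query 'AutoScalingGroups[].AutoScalingGroupName' --output text \
| xargs -n1 aws autoscaling resume-processes --auto-scaling-group-name
```

Without `--scaling-processes`, `suspend-processes` suspends all process types (Launch, Terminate, HealthCheck, etc.), which is what you want while the instances are stopped.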
@037g that's a good idea; there's a similar feature for ECS Fargate tasks. Will add it to the to-do list.
Unfortunately v0.0.10 did not work for me. Your approach caused my instances to be terminated instead of being shut down.
The Auto Scaling processes did not get suspended (see the last column in the Auto Scaling groups view; you have to turn that column on in the dashboard). AWS Docs
Instances:
https://www.dropbox.com/s/qqqi29ha9xah888/instances.png?dl=0
Autoscaling:
https://www.dropbox.com/s/r3mcowg7fifn44p/autoscaling.png?dl=0
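As a side note, whether suspension actually took effect can also be checked from the CLI instead of the console column; a quick check with the standard describe call might look like this:

```sh
# Show each Auto Scaling group with the names of its suspended processes;
# an empty list means nothing is suspended for that group.
aws autoscaling describe-auto-scaling-groups \
  --query 'AutoScalingGroups[].{Name: AutoScalingGroupName, Suspended: SuspendedProcesses[].ProcessName}' \
  --output json
```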
Log Output
```
AWS Cost Saver
Action: Conserve
AWS region: eu-west-1
AWS profile: default
State file: eu-awsenv.json
Dry run: no
❯ Shutdown EC2 Instances
✔ Fetch current state
❯ Conserve resources
✔ i-083f3d4e1b9202e94 / gke-063c034a-controlplane-1
✖ i-085dc0e4010790f01 / gke-063c034a-nodepool-0abb80dc
→ Resource is not in the state instanceStopped
✖ i-0fc6d8ae1434fc98e / gke-063c034a-nodepool-0abb80dc
→ Resource is not in the state instanceStopped
✖ i-0c540a745e4a86c95 / gke-063c034a-nodepool-0abb80dc
→ Resource is not in the state instanceStopped
✔ i-07ed551718eb85aea / gke-063c034a-controlplane-2
✖ i-0d4d63ef1a0d917d6 / gke-063c034a-nodepool-436c79f1
→ Resource is not in the state instanceStopped
✖ i-022f61c0ce0859bc3 / gke-063c034a-nodepool-436c79f1
→ Resource is not in the state instanceStopped
✖ i-02173c3a8941920c0 / gke-063c034a-nodepool-436c79f1
→ Resource is not in the state instanceStopped
✔ i-0f6ac5ceaa994fb40 / gke-03fdbaba-bastion
✔ i-07c7026aa9979b0d9 / gke-03fdbaba-management-0
✔ i-048bca573bc4d18c7 / gke-063c034a-controlplane-0
✖ i-05e9a3e4e8923c1bd / gke-063c034a-nodepool-2ed5b70c
→ This instance 'i-05e9a3e4e8923c1bd' is not in a state from which it can be s
…
✖ i-083387083b1682585 / gke-063c034a-nodepool-2ed5b70c
→ This instance 'i-083387083b1682585' is not in a state from which it can be s
…
✖ i-00e56a12b6db9d660 / gke-063c034a-nodepool-2ed5b70c
→ This instance 'i-00e56a12b6db9d660' is not in a state from which it can be s
…
✔ Stop Fargate ECS Services
✔ Stop RDS Database Instances
✔ Decrease DynamoDB Provisioned RCU and WCU
✔ Decrease Kinesis Streams Shards
✔ Stop RDS Database Clusters
✔ Scale-down Auto Scaling Groups
❯ Wrote state file to eu-awsenv.json
✖ All 7 tricks failed with errors.
```
Hi @037g, thanks for the quick feedback and for sharing your screenshots. I see that for your use-case (K8s, I suppose) scaling down an ASG would not work, since the instances might not be stateless and scaling back up is painful or impossible!
For my use-case removing some volumes would've helped, but I have added a new trick that only "suspends" scaling processes, and I have disabled "scale-down" by default since it will not work for many use-cases!
I really hope bringing your cluster back up is not too hard! I should mention that the scale-down trick sets the desired count to zero and terminates all instances!
To avoid the same issue for other people, I have released v0.0.11, which only suspends ASGs by default.
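For anyone skimming this thread, the difference between the two tricks at the AWS API level is roughly the following. This is only an illustrative sketch with a placeholder group name `my-asg`, not the tool's actual code:

```sh
# "Suspend" (new default in v0.0.11): pause the group's scaling processes so
# that stopped instances are not replaced; nothing gets terminated.
aws autoscaling suspend-processes --auto-scaling-group-name my-asg

# "Scale-down" (now disabled by default): set min size and desired capacity to
# zero, which terminates every instance currently in the group.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --min-size 0 --desired-capacity 0
```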
@aramalipoor Thanks for the quick turnaround. I'll give it another try tonight.
@aramalipoor everything worked beautifully! Thank you for your quick response and efforts.