Modifying manifests
cemo opened this issue · 12 comments
I have an interesting case where my modified API server manifests are somehow reverted. I wanted to enable the batch API so I could use Kubernetes CronJobs, but I ran into some issues.
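For context, the edit I was trying to make looked roughly like this: adding --runtime-config=batch/v2alpha1=true to the kube-apiserver static pod manifest so the CronJob API becomes available. The manifest path, binary name, and API group/version depend on the Kubernetes release, so treat this as a sketch of my change rather than something exact:

    # /etc/kubernetes/manifests/kube-apiserver.yaml  (path may differ per setup)
    # only the relevant part of the container spec is shown
    spec:
      containers:
        - name: kube-apiserver
          command:
            - /hyperkube
            - apiserver
            # ... existing flags ...
            - --runtime-config=batch/v2alpha1=true   # enables the alpha batch API used by CronJobs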
- What is the cause of this revert?
- What is the correct way to modify them?
- Because cloud-init on CoreOS runs every time the machine completes a boot, and it rewrites the manifests from the user-data.
- To modify the user-data on the etcd instances you need to stop the instance, and you need to create a new launch configuration for the auto scaling group (see the sketch below for where the manifests live in the user-data).
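To make the first point concrete: the control plane manifests are written out by cloud-init from the instance user-data, so a manual edit on the host only survives until the next boot. A minimal sketch of the relevant user-data section (file names and layout are illustrative, not the exact tack template):

    #cloud-config
    write_files:
      - path: /etc/kubernetes/manifests/kube-apiserver.yaml
        permissions: "0644"
        content: |
          # the full kube-apiserver static pod manifest goes here;
          # this is the copy you actually need to change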
I have a production setup. Can I do this operation in a safe manner? :)
terraform plan -target=module.etcd.aws_instance.etcd[0]
displays only 1 instance. Can I do it this way? What do you think?
Yes, you can use this procedure. But step one is outdated: tack no longer stores the manifests on S3, so you will need to edit the user-data for the worker nodes.
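For the workers, that user-data is fed in roughly through their launch configuration; a hypothetical shape (the resource names and templating are assumptions, the real tack modules will differ):

    # hypothetical excerpt of the worker auto scaling setup
    resource "aws_launch_configuration" "worker" {
      image_id      = "${var.coreos_ami}"
      instance_type = "${var.worker_instance_type}"
      # changing this forces a new launch configuration; workers brought up
      # by the auto scaling group afterwards will get the new manifests
      user_data     = "${file("cloud-config/worker.yaml")}"
    }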
terraform plan -target=module.etcd.aws_instance.etcd[0]
I think that can't work, because you need to stop the instance to modify the user-data.
When you change the user-data, Terraform destroys and recreates the instance. Do you think it is doable this way?
Yes, but you need to do it instance by instance on the masters, or etcd will become unhealthy.
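Roughly, per instance (the index count and the health check are assumptions for a 3-member etcd cluster; adjust to your setup):

    # replace one etcd/master at a time and let the cluster settle in between
    terraform plan  -target=module.etcd.aws_instance.etcd[0]
    terraform apply -target=module.etcd.aws_instance.etcd[0]
    etcdctl cluster-health   # wait until all members report healthy
    # then repeat for etcd[1], etcd[2], ...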
Actually, that is why I wrote etcd[0]: I was considering doing them one by one.
Nice, I think that will work.
I will also upgrade the cluster. Does the order matter? I mean, should I update the masters first and then the workers?
The order does not matter.