kz8s/tack

Modifying manifests

cemo opened this issue · 12 comments

cemo commented

I have an interesting case where my modified API server manifests are somehow reverted. I wanted to enable the batch API so I could use Kubernetes CronJobs, but ran into some issues.

  1. What is the cause of this revert?
  2. What is the correct way to modify them?
  1. Because cloud-init on CoreOS runs every time the machine completes a boot, it overwrites any manual changes to the manifests.
  2. Modify the user-data of the etcd instances (see the sketch below); you need to stop the instance and create a new launch configuration for the auto scaling group.
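
For reference, assuming the kube-apiserver runs as a static pod whose manifest is written by the user-data, the flag to add to its command line would be --runtime-config=batch/v2alpha1=true (the API group CronJobs used in those Kubernetes versions). Once the node is back you can check the group is served with something like:

# confirm the batch/v2alpha1 API group is exposed after the apiserver comes back
kubectl api-versions | grep batch/v2alpha1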
cemo commented

I have a production setup. Can I do this operation in a safe manner? :)

cemo commented
terraform plan -target=module.etcd.aws_instance.etcd[0]

displays only one instance. Can I do it this way? What do you think?

Yes, you can use this procedure. But step one is outdated: tack no longer stores manifests on S3, so you will need to edit the user-data for the worker nodes.

terraform plan -target=module.etcd.aws_instance.etcd[0]

I don't think that will work, because you need to stop the instance to modify its user-data.
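
If it helps to see exactly what a node booted with before editing, one way (purely illustrative; the instance ID is a placeholder) is to pull and decode the current user-data with the AWS CLI:

# download and decode the user-data an existing instance was launched with
aws ec2 describe-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --attribute userData \
  --query 'UserData.Value' --output text | base64 --decode > user-data.yml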

cemo commented

When you change the user-data, Terraform destroys and recreates the instance. Do you think it is doable this way?

Yes, but you need to do it instance by instance on the masters, or etcd will become unhealthy.
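
A rough sequence along those lines, assuming three etcd/master instances; the indices, the SSH target, and the etcdctl health check are illustrative, not taken from tack itself:

# recreate one etcd/master instance at a time, confirming cluster health in between
for i in 0 1 2; do
  terraform apply -target="module.etcd.aws_instance.etcd[$i]"
  # from any master, check that every etcd member reports healthy before continuing
  ssh core@<master-ip> etcdctl cluster-health
done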

cemo commented

Actually, that is why I wrote etcd[0]: I was considering doing them one by one.

Nice, I think that will work.

cemo commented

I will also upgrade the cluster. Does the order matter? I mean, should I update the masters first and then the workers?

The order does not matter.