redhat-cop/openshift-applier

Multi-Cluster support

robkohler opened this issue · 3 comments

The issue is that, when running, oc is assumed to already be logged in to the cluster you want to configure.
I would like the ability to specify the .kube/config file using oc --config xyz, but it would need to be on ALL the "oc" lines. That way I can have multiple configs, one per cluster, and run as a local user.

I am currently separating Tower to run these as different users, each with a pre-configured kube config that is already logged in. The issue with the current solution is that, because Ansible runs remotely (the only way to force a new user/privs), it has to copy all the files over, and it takes over 20 minutes to configure my cluster. Very inefficient.
Other ideas welcome. If there are no other ideas, I'll branch and offer to add the config file or context as a var for all "oc" commands, as I really want to configure 4+ clusters simultaneously without having to worry about whether "oc" is pointing at the wrong cluster during a run, all with the same user.

oybed commented

Please see this WIP (which has been open for a loooong time) around multi-cluster support: #13
... it's basically setting the KUBECONFIG env variable to point to the "correct" cluster before applying content.

Since the WIP has been open for this long, I'm sure there are multiple areas that require re-work, but just wanted to call it out as something that has been considered and something that can potentially be completed.
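As a rough sketch of that idea (file names and the apply step below are hypothetical stand-ins, not actual applier code): keep one pre-authenticated kubeconfig per cluster, and point KUBECONFIG at the right one before each run.

```shell
#!/bin/sh
# Sketch of the KUBECONFIG-per-cluster idea from #13.
# apply_content stands in for the real applier invocation
# (e.g. ansible-playbook against the applier roles).
apply_content() {
  echo "applying with KUBECONFIG=$KUBECONFIG"
}

# One pre-authenticated kubeconfig per cluster (hypothetical paths):
for cluster in dev int prd; do
  export KUBECONFIG="$HOME/.kube/config-$cluster"
  apply_content
done
```

Every "oc" call inside the run then inherits KUBECONFIG, so nothing has to be passed on each command line.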

BTW: regarding remote vs. local execution - the openshift-applier supports this and will copy the necessary files to the target system if/when needed. As you mentioned, though, this can increase the execution time quite drastically.

I wanted to throw an alternative solution out there... more and more, I am seeing value in trying to make applier as Kubernetes-native as possible (i.e. making the OpenShift-specific bits optional; see #129). So what if, rather than doing anything with oc login, we added to applier the ability to set the current-context within a kubeconfig file?

Here's an example:

$ oc login cluster.dev.myorg.com:6443
$ oc login cluster.int.myorg.com:6443
$ oc login cluster.prd.myorg.com:6443

$ cat ~/.kube/config | yq .contexts[].name
"default/cluster-dev-myorg-com:6443/kube:admin"
"default/cluster-int-myorg-com:6443/kube:admin"
"default/cluster-prd-myorg-com:6443/kube:admin"

$ cat ~/.kube/config | yq -r '.["current-context"]'
"default/cluster-dev-myorg-com:6443/kube:admin"
Then applier could support something like this in its inventory:

openshift_cluster_content:
  - object: Stuff that goes everywhere
    cluster-contexts:
      - "default/cluster-dev-myorg-com:6443/kube:admin"
      - "default/cluster-int-myorg-com:6443/kube:admin"
      - "default/cluster-prd-myorg-com:6443/kube:admin"
    content:
      ....
  - object: Dev Only
    cluster-contexts:
      - "default/cluster-dev-myorg-com:6443/kube:admin"
    content:
       ....

So the user's responsibility is to make sure they are logged in to all 3 clusters before running applier, and then applier doesn't have to manage credentials of any kind.
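That "logged in to all 3 clusters" precondition could be checked up front. A minimal pre-flight sketch, assuming `oc config get-contexts -o name` for listing context names (stubbed out below so the logic runs without a cluster):

```shell
#!/bin/sh
# Fail fast if a required context is missing from the kubeconfig.
list_contexts() {
  # Stand-in for: oc config get-contexts -o name
  printf '%s\n' \
    "default/cluster-dev-myorg-com:6443/kube:admin" \
    "default/cluster-int-myorg-com:6443/kube:admin" \
    "default/cluster-prd-myorg-com:6443/kube:admin"
}

require_context() {
  # $1 = exact context name that must exist
  list_contexts | grep -qxF "$1" || {
    echo "missing login for context: $1" >&2
    return 1
  }
}

require_context "default/cluster-dev-myorg-com:6443/kube:admin" && echo "dev ok"
```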

oybed commented

Yes, I do like the idea of using the context aspect - e.g. just passing the --context flag would do for both oc and kubectl. I know the k8s Ansible modules also support this - e.g. k8s_facts - so I think that would align well.
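For illustration only (this is not applier code): the k8s_facts module accepts a context parameter, so a task pinned to one of the contexts from the earlier example might look roughly like:

```yaml
# Untested playbook fragment: query a specific cluster context
# instead of relying on whatever oc happens to be logged in to.
- name: Gather namespaces from the dev cluster
  k8s_facts:
    kind: Namespace
    context: "default/cluster-dev-myorg-com:6443/kube:admin"
  register: dev_namespaces
```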