admiraltyio/admiralty

Deploy and run admiralty agent on a separate cluster?

mikeshng opened this issue · 5 comments

Hi, I am wondering if it's possible to run the Admiralty agent on cluster foo while the agent actually manages cluster bar?

The scenario I want to accomplish is a cluster with no pods running on it. Currently, I can use Admiralty to delegate all new workloads to other clusters, but the Admiralty agent itself still requires deployment(s) and pod(s).

If it's not possible, can you please provide guidance on how to accomplish this with some code changes to do a POC? I am willing to drive the POC implementation. Thanks.

Hi Mike,

The code allows it, but not the Helm chart.

  1. CRDs, RBAC resources, and the MutatingWebhookConfiguration (if cluster bar is a source) must still be installed in cluster bar, while the rest can move to cluster foo.
  2. The MutatingWebhookConfiguration (in cluster bar if it is a source) must be configured to point to the Service in cluster foo, which must be routable from cluster bar's kube-apiserver.
  3. The Deployments must have the KUBECONFIG environment variable set, pointing to a mounted kubeconfig file that allows pods in cluster foo to control cluster bar. It will be used instead of the local service account.
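The steps above could be sketched roughly as follows. This is only an illustration under assumptions: the resource names, namespaces, URL, image, and Secret name are all hypothetical, not the values Admiralty's Helm chart actually uses, and the webhook path and port would need to match what the agent really serves.

```yaml
# In cluster bar (a source): webhook pointing at a URL routable from
# bar's kube-apiserver, instead of an in-cluster Service reference.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: admiralty-mutate-pods            # hypothetical name
webhooks:
  - name: pods.admiralty.example.com     # hypothetical
    clientConfig:
      url: https://admiralty.foo.example.com/mutate  # endpoint exposed from cluster foo
      caBundle: <base64-encoded CA>      # CA that signed the serving certificate
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    sideEffects: None
    admissionReviewVersions: ["v1"]
---
# In cluster foo: the agent Deployment, pointed at cluster bar
# via KUBECONFIG instead of its local service account.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: admiralty-agent                  # hypothetical
spec:
  replicas: 1
  selector:
    matchLabels: {app: admiralty-agent}
  template:
    metadata:
      labels: {app: admiralty-agent}
    spec:
      containers:
        - name: agent
          image: example.com/admiralty/agent:latest  # placeholder image
          env:
            - name: KUBECONFIG
              value: /etc/admiralty/kubeconfig  # used instead of the local service account
          volumeMounts:
            - name: bar-kubeconfig
              mountPath: /etc/admiralty
              readOnly: true
      volumes:
        - name: bar-kubeconfig
          secret:
            secretName: bar-kubeconfig   # hypothetical Secret holding a kubeconfig for cluster bar
```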

Please keep me posted on your POC and let me know if you have any questions along the way. When you're done, could you please contribute documentation and/or Helm chart options for this use case?

Adrien, thank you once again for your help. I was not able to proceed with the POC due to some minor roadblocks that I do not have the cycles to resolve at the moment. Some of those things are:

  • compatibility with OCP 4.9
  • all the CRDs to be v1
  • cert-manager also needs to run on cluster foo.

Hopefully, I might have time to revisit this later. Sorry for not being able to contribute.

Hi Mike,

compatibility with OCP 4.9

Compatibility with Kubernetes 1.22 is planned for this month, hopefully this week. OpenShift RBAC fixes are also in the works, cf. #134.

all the CRDs to be v1

Do you mean that they should use apiextensions.k8s.io/v1 instead of apiextensions.k8s.io/v1beta1 (for compatibility with Kubernetes 1.22), or that they should define multicluster.admiralty.io/v1 instead of multicluster.admiralty.io/v1alpha1 (due to some policy on your side)?

cert-manager also needs to run on cluster foo

You might not need cert-manager. What do you typically use on your side for webhook certificates?

Compatibility with Kubernetes 1.22 is planned for this month, hopefully this week.

👍

Do you mean that they should use apiextensions.k8s.io/v1 instead of apiextensions.k8s.io/v1beta1 (for compatibility with Kubernetes 1.22),

Yes.

You might not need cert-manager. What do you typically use on your side for webhook certificates?

We have different teams using different methodologies. Some of them use OCP's TLS service serving certificates and OCP's CA injection via annotation, i.e.:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  annotations:
    service.beta.openshift.io/inject-cabundle: "true"
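For context, the serving-certificate side of that approach is typically requested by annotating the webhook's Service, so the companion manifest might look like this (a sketch assuming OpenShift's service-ca operator; the names and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webhook-service                  # hypothetical name
  annotations:
    # Asks the service-ca operator to generate a TLS cert/key
    # for this Service into the named Secret.
    service.beta.openshift.io/serving-cert-secret-name: webhook-tls
spec:
  selector:
    app: webhook
  ports:
    - port: 443
      targetPort: 8443                   # the webhook server's listen port
```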

FYI, v0.15.0 was released and supports Kubernetes 1.22 / OpenShift 4.9.