coredns/kubernetai

Can you please expand the README a little?

Closed this issue · 15 comments

I've got two k8s clusters in the same AWS VPC, using the AWS VPC CNI plugin, and I want the service discovery piece, but it's not clear from the doc whether I have to enable this on both clusters, or only on the one that has to discover the resources on the other.

And do the http endpoints given in your sample config refer to the IPs of the k8s API servers?

not clear from the doc whether I have to enable this on both clusters, or only on the one that has to discover the resources on the other.

Only on the one which has to discover the resources on the other.

And do the http endpoints given in your sample config refer to the IPs of the k8s API servers?

Yes. Kubernetai uses the same options as the kubernetes plugin, which is documented here... https://github.com/coredns/coredns/tree/master/plugin/kubernetes
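
For example, here is a minimal sketch of a single-stanza Corefile that discovers services in a remote cluster; the zone and the endpoint URL (https://remote-k8s-apiserver:6443) are placeholders for your remote cluster's domain and API server address:

. {
    errors
    log
    kubernetai cluster.local {
      endpoint https://remote-k8s-apiserver:6443
    }
}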

Hi @chrisohaver
Would you recommend using this project in production?
Is this being used in Tectonic?

BTW: I am planning to use this together with https://github.com/aws/amazon-vpc-cni-k8s

-Thanks

Hi @chrisohaver

At present I am using kops to spin up my test k8s clusters, and I couldn't find a way to change the default internal k8s domain of cluster.local.
kubernetes/kops#6715

Am I correct to assume that unless I figure that out, I can't use kubernetai?

-Thanks

You should be able to use kubernetai on two clusters with the same domain by using the fallthrough option.

Fallthrough in kubernetai will fall through to the next kubernetai stanza (in the order they appear in the Corefile), or to the next plugin (if it's the last kubernetai stanza).

Below is an example Corefile that makes a connection to the local cluster, but also to a remote cluster that uses the same domain name. Because both kubernetai stanzas serve the same zone, queries for cluster.local will always first get processed by the first stanza. The fallthrough in the first stanza allows processing to go to the next stanza if the service is not found in the first.

. {
    errors
    log
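    # local cluster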
    kubernetai cluster.local {
      fallthrough
    }
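    # remote cluster (same domain)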
    kubernetai cluster.local {
      endpoint http://remote-k8s-cluster:8080
    }
}

@johnbelamaric, Thanks, I've added the option, and updated the text to better frame the example in https://github.com/coredns/kubernetai/blob/master/README.md. I also added a paragraph about using Headless Services instead of Cluster IP services. Feel free to tweak and/or completely rewrite.

Hi @chrisohaver

If I may... I would request you to elaborate on how the kubernetai plugin exposes DNS entries for the other k8s cluster. And how to secure it?

It uses Kubernetes' client-go libraries to access the API of the other cluster. This connection can use a secured https connection.
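
For instance, a sketch of a stanza that authenticates to the remote API server over https using the kubernetes plugin's kubeconfig option (the endpoint address, file path, and context name below are placeholders):

kubernetai cluster.local {
  endpoint https://remote-k8s-apiserver:6443
  kubeconfig /etc/coredns/remote-kubeconfig remote-context
}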

Hi @chrisohaver ,

How about mTLS if we have the root CA sitting outside of the two k8s clusters?

-Thanks

Hi @chrisohaver ,

As it's built on top of HTTP and not directly watching etcd events, we have a possibility of a client on the other side getting NXDOMAIN errors while kubernetai is still syncing the new DNS entries. Or is this handled with some additional ad-hoc pull-based mechanism?

In my case I have a NoSQL database running in k8s which spins up new pods fronted by headless services. On the client side, the SDK will discover the new pods via the existing service, but it will not be able to resolve the pod IPs until kubernetai finishes pulling the DNS entries from the remote cluster.

-Thanks

During the initial syncing of data (which happens very quickly), CoreDNS reports SERVFAIL instead of NXDOMAIN.

How about mTLS if we have the root CA sitting outside of the two k8s clusters?

There is the tls option.
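
The tls option takes a client cert, key, and CA cert, so a root CA held outside of both clusters can be used to verify the remote API server and authenticate the client. As a sketch, with the endpoint and file paths as placeholders:

kubernetai cluster.local {
  endpoint https://remote-k8s-apiserver:6443
  tls /etc/coredns/client.crt /etc/coredns/client.key /etc/coredns/external-root-ca.crt
}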

reopen if this is still an issue

Looks great @bjethwan! I'll be sure to leave the image in place indefinitely.

Thanks!