skynetservices/skydns

Kubernetes SkyDNS doesn't work

arjunm183 opened this issue · 9 comments

Hi Team,

I am facing issues with SkyDNS in Kubernetes. Kindly help me fix this issue.
Attaching the YAML files for the SkyDNS service and replication controller below.

==============SKY DNS SERVICE YAML========================

cat skydns-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

================SKYDNS RC YAML======================
cat skydns-rc-correct.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v11
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v11
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v11
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v11
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: registry.corpintra.net:5000/etcd-amd64:2.2.1
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
        volumeMounts:
        - name: etcd-storage
          mountPath: /var/etcd/data
      - name: kube2sky
        image: registry.corpintra.net:5000/kube2sky:1.14
        env:
        - name: "KUBERNETES_RO_SERVICE_HOST"
          value: "10.254.0.10"
        - name: "KUBERNETES_RO_SERVICE_PORT"
          value: "80"
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            # Kube2sky watches all pods.
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 50Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube2sky"
        - --domain=kubernetes.local
      - name: skydns
        image: registry.corpintra.net:5000/skydns:2015-10-13-8c72f8c
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/skydns"
        - -machines=http://127.0.0.1:4001
        - -addr=0.0.0.0:53
        - -ns-rotate=false
        - -domain=kubernetes.local.
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: registry.corpintra.net:5000/exechealthz:1.0
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        args:
        - -cmd=nslookup kube-dns.kube-system.svc.kubernetes.local 127.0.0.1 >/dev/null
        - -port=8080
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}
      dnsPolicy: Default  # Don't use cluster DNS.

COMMAND:

kubectl cluster-info

Kubernetes master is running at http://localhost:8080
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns

kubectl --namespace kube-system describe svc kube-dns
Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.254.0.10
Port: dns 53/UDP
Endpoints:
Port: dns-tcp 53/TCP
Endpoints:
Session Affinity: None
No events.

As you can see here, there are no endpoints.
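The Endpoints list stays empty until a pod matching the selector k8s-app=kube-dns is Running and passing its readiness probe. As a quick sketch, this can be confirmed with:

kubectl --namespace kube-system get endpoints kube-dns
kubectl --namespace kube-system get pods -l k8s-app=kube-dns -o wide

Until the kube-dns pod is Ready, the service has nothing behind the 10.254.0.10 VIP.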

===========KUBELET CONFIG============================

My kubelet configuration as below

cat /etc/kubernetes/kubelet

# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=52.111.69.161"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://52.111.69.161:8080"

# Add your own!
KUBELET_ARGS="--pod-infra-container-image=registry.corpintra.net:5000/pause:2.0 --cluster_dns=10.254.0.10 --cluster_domain=kubernetes.local"
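The --cluster_dns and --cluster_domain flags are what make the kubelet write the DNS VIP into each pod's /etc/resolv.conf (for pods with dnsPolicy: ClusterFirst). A quick sanity check, where <some-pod> is a placeholder for any running pod:

kubectl exec <some-pod> -- cat /etc/resolv.conf

The output should contain nameserver 10.254.0.10 and a search path based on kubernetes.local.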

==============POD OUTPUT================================
kubectl --namespace kube-system get pod
NAME                         READY     STATUS             RESTARTS   AGE
kube-dns-v11-1xtm2           3/4       CrashLoopBackOff   45         2h
kubernetes-dashboard-a0byv   1/1       Running            1          2d

kubectl logs for the skydns container in the pod shows this error:

kubectl logs po/kube-dns-v11-1xtm2 --namespace kube-system skydns
2016/07/13 05:44:21 skydns: falling back to default configuration, could not read from etcd: 100: Key not found (/skydns) [3]
2016/07/13 05:44:21 skydns: ready for queries on kubernetes.local. for tcp://0.0.0.0:53 [rcache 0]
2016/07/13 05:44:21 skydns: ready for queries on kubernetes.local. for udp://0.0.0.0:53 [rcache 0]
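The "Key not found (/skydns)" line is usually harmless: skydns simply falls back to its built-in defaults when no configuration has been stored in etcd under /skydns. Since the pod reports 3/4 Ready, it is worth checking which container is actually restarting and whether kube2sky is writing any records into etcd at all; a sketch, using the pod name from the output above:

kubectl --namespace kube-system logs kube-dns-v11-1xtm2 kube2sky
kubectl --namespace kube-system exec kube-dns-v11-1xtm2 -c etcd -- etcdctl ls --recursive /skydns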

Need help.

miekg commented

Sorry. I'm not able to give support for skydns/k8s issues.

Hello, thanks for the reply.

Can you guide me on where I could get support for this?

@aledbf I tried the YAML file you referred to, and kube-dns is running; however, I get an error when I execute the command below to collect logs.
How do I test that DNS is working correctly?
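One common check, once the kube-dns pod is fully Ready, is an nslookup from a throwaway pod; a sketch, assuming a busybox image is reachable from the cluster (the image name is an assumption here):

kubectl run -it busybox --image=busybox --restart=Never -- nslookup kubernetes.default.svc.kubernetes.local

A successful lookup should return the cluster IP of the kubernetes service (10.254.0.1 in this setup).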

ERROR:
kubectl --namespace kube-system logs kube-dns-v15-sbp6n kubedns
E0714 16:00:57.325984 1 config.go:258] Expected to load root CA config from /var/run/secrets/kubernetes.io/serviceaccount/ca.crt, but got err: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory
I0714 16:00:57.326212 1 server.go:91] Using https://10.254.0.1:443 for kubernetes master
I0714 16:00:57.326223 1 server.go:92] Using kubernetes API
I0714 16:00:57.326644 1 server.go:132] Starting SkyDNS server. Listening on port:10053
I0714 16:00:57.326774 1 server.go:139] skydns: metrics enabled on :/metrics
I0714 16:00:57.326803 1 dns.go:166] Waiting for service: default/kubernetes
I0714 16:00:57.327385 1 logs.go:41] skydns: ready for queries on kubernetes.local. for tcp://0.0.0.0:10053 [rcache 0]
I0714 16:00:57.327452 1 logs.go:41] skydns: ready for queries on kubernetes.local. for udp://0.0.0.0:10053 [rcache 0]
I0714 16:00:57.426296 1 dns.go:172] Ignoring error while waiting for service default/kubernetes: Get https://10.254.0.1:443/api/v1/namespaces/default/services/kubernetes: x509: failed to load system roots and no roots provided. Sleeping 1s before retrying.
E0714 16:00:57.428254 1 reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: Get https://10.254.0.1:443/api/v1/endpoints?resourceVersion=0: x509: failed to load system roots and no roots provided

@arjunm183 if the file /var/run/secrets/kubernetes.io/serviceaccount/ca.crt is missing, the problem is that your service accounts are not set up correctly. Please check kubernetes/kubernetes#27973.
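For reference, the ca.crt entry in a service account token secret is only added when kube-controller-manager runs with --root-ca-file (token signing additionally needs --service-account-private-key-file there and --service-account-key-file on the apiserver). A sketch in the same sysconfig style as the kubelet file above; the /srv/kubernetes paths and key file names are assumptions:

# /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key"

# /etc/kubernetes/apiserver
KUBE_API_ARGS="--service-account-key-file=/srv/kubernetes/server.key"

After restarting the components, deleting the existing default token secrets lets the token controller regenerate them with ca.crt included.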

@aledbf: I get the output below when I execute the command:
kubectl --server 127.0.0.1:8080 describe serviceaccounts default
Name: default
Namespace: default
Labels:
Mountable secrets: default-token-3zwlo
Tokens: default-token-3zwlo
Image pull secrets:

It looks like I don't have the ca.crt file present in the kubedns container.

However, the token in the container and the token in the secret are the same, but there is no ca.crt.

How do I add the ca.crt to the YAML so that it gets replicated into the container (like volume sharing from the host to containers)?
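One way to tell whether the problem is in the pod spec or in the secret itself is to inspect the token secret directly (the secret name is taken from the output below):

kubectl --namespace kube-system get secret default-token-zbx2p -o yaml | grep ca.crt

If the ca.crt key is missing from the secret, adding volumes to the pod YAML will not help; the secret needs to be regenerated (for example by deleting it so the token controller recreates it) once the controller manager is configured with --root-ca-file as sketched above.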

Additional logs below:

kubectl describe secret default-token-zbx2p --namespace kube-system
Name: default-token-zbx2p
Namespace: kube-system
Labels:
Annotations: kubernetes.io/service-account.name=default,kubernetes.io/service-account.uid=aede2f13-43c1-11e6-96b3-005056b69c1a

Type: kubernetes.io/service-account-token

Data

namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuLXpieDJwIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJhZWRlMmYxMy00M2MxLTExZTYtOTZiMy0wMDUwNTZiNjljMWEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06ZGVmYXVsdCJ9.eIY0BiKTfRICco6wXEFvRHxGLnT4G9uDZboaxRFfYyHMyxwDEox5reodR8qMk-KkUT4-U5xdM_K_4ZYsc1lq7YSTrgFnCZ-s-tElO7b15qRAVyEHILtZC6kaJ4MPI_75Ne-QnFZER_ycgGLyE7WsTczYO0Ty_NKyczWlEqYwlXnRxGTJMX6zYXMTJ46w2xX7VvSYMtz9CRHs66MsDNtdfgKgLKJkWU_6eE5srt3vJlRP5n2BrR-HPYRMmSzbHfadMJAdFvjB4JPa2uJKBdIq2IdvfWWXec4qjnd1jLmffs9jfB8jxtPmmGzd9eyk9_KS3tn1bDTP64TRVsQ2jAMWyg

host1:~/kube-system/deploy/skydns # kubectl --namespace kube-system exec kube-dns-v15-sbp6n -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuLXpieDJwIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJhZWRlMmYxMy00M2MxLTExZTYtOTZiMy0wMDUwNTZiNjljMWEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06ZGVmYXVsdCJ9.eIY0BiKTfRICco6wXEFvRHxGLnT4G9uDZboaxRFfYyHMyxwDEox5reodR8qMk-KkUT4-U5xdM_K_4ZYsc1lq7YSTrgFnCZ-s-tElO7b15qRAVyEHILtZC6kaJ4MPI_75Ne-QnFZER_ycgGLyE7WsTczYO0Ty_NKyczWlEqYwlXnRxGTJMX6zYXMTJ46w2xX7VvSYMtz9CRHs66MsDNtdfgKgLKJkWU_6eE5srt3vJlRP5n2BrR-HPYRMmSzbHfadMJAdFvjB4JPa2uJKBdIq2IdvfWWXec4qjnd1jLmffs9jfB8jxtPmmGzd9eyk9_KS3tn1bDTP64TRVsQ2jAMWyg

@arjunm183 please check the procedure you followed to create the certificates for k8s.
You can use this guide to check what's missing.
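A quick sanity check on the generated certificates themselves can also help; a sketch, assuming the certs live under /srv/kubernetes (the paths are assumptions):

openssl x509 -in /srv/kubernetes/ca.crt -noout -subject -dates
openssl verify -CAfile /srv/kubernetes/ca.crt /srv/kubernetes/server.crt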

@arjunm183 just a suggestion (as a Kubernetes user): please close this issue.
This is not related to skydns, and it can generate more traffic from search engines (service account troubles are a common thing).