redhat-developer-demos/knative-tutorial

failing in the setup phase of the knative-tutorial

wkremser opened this issue · 5 comments

Describe the bug
I tried to go through the Knative tutorial, but I am already failing in the setup portion of it.

I already described this in the chat of the Knative masterclass today. Here is a session recording of my 3rd installation attempt (recorded with the script utility on macOS 10.15.4).

With regards,
Wolfgang "DJ Madie" Kremser

To Reproduce
Please check the attached file.

Expected behavior
Successfully completing the setup steps of the tutorial.

Desktop (please complete the following information):

  • OS: macOS Catalina (10.15.4)
  • kubectl: v1.18.3
  • minikube: v1.11.0

Additional context
Attached is my session transcript.

[knative-tutorial-installation.txt](https://github.com/redhat-developer-demos/knative-tutorial/files/4760162/knative-tutorial-installation.txt)

@wkremser - do you mean the crashing knative-serving pods? From your script I also see you are using an incompatible kubectl - i.e. a 1.18 client against a 1.15 cluster.

Can you please describe the controller and webhook pods from knative-serving and share the output?
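For reference, the describe calls would look roughly like this (a sketch that selects by the app labels Knative Serving v0.14 puts on those deployments, so the exact pod names are not needed):

# Describe the Knative Serving controller and webhook pods by label
kubectl describe pods -n knative-serving -l app=controller
kubectl describe pods -n knative-serving -l app=webhook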

Hi @kameshsampath, yes, I mean that all pods are crashing. I was trying this tutorial during yesterday's presentation by @burrsutter.

@sebastienblanc asked me to create an issue here.

The newer versions of kubectl and/or minikube could be the problem here.
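As a quick check of that suspected skew, comparing client and server versions in one place should show it (a sketch; a kubectl client more than one minor version away from the API server is outside the supported skew):

# Show client vs. server version; a 1.18 client against a 1.15 server is out of skew
kubectl version --short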

Here is the output:

kubectl describe pods controller-8564567c4c-bzt5j -n knative-serving

Name:           controller-8564567c4c-bzt5j
Namespace:      knative-serving
Priority:       0
Node:           knativetutorial/192.168.64.4
Start Time:     Wed, 10 Jun 2020 19:17:49 +0200
Labels:         app=controller
                pod-template-hash=8564567c4c
                serving.knative.dev/release=v0.14.0
Annotations:    cluster-autoscaler.kubernetes.io/safe-to-evict: true
Status:         Running
IP:             172.17.0.6
IPs:
Controlled By:  ReplicaSet/controller-8564567c4c
Containers:
  controller:
    Container ID:   docker://5ecb6f92e1324f0b7a6226688105d14b33d5fbc718857e38c2dab0013e5cd428
    Image:          gcr.io/knative-releases/knative.dev/serving/cmd/controller@sha256:71f7c9f101e7e30e82a86d203fb98d6fa607c8d6ac2fcb73fd1defd365795223
    Image ID:       docker-pullable://gcr.io/knative-releases/knative.dev/serving/cmd/controller@sha256:71f7c9f101e7e30e82a86d203fb98d6fa607c8d6ac2fcb73fd1defd365795223
    Ports:          9090/TCP, 8008/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 11 Jun 2020 10:04:04 +0200
      Finished:     Thu, 11 Jun 2020 10:04:04 +0200
    Ready:          False
    Restart Count:  178
    Limits:
      cpu:     1
      memory:  1000Mi
    Requests:
      cpu:     100m
      memory:  100Mi
    Environment:
      SYSTEM_NAMESPACE:           knative-serving (v1:metadata.namespace)
      CONFIG_LOGGING_NAME:        config-logging
      CONFIG_OBSERVABILITY_NAME:  config-observability
      METRICS_DOMAIN:             knative.dev/internal/serving
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from controller-token-x6xv8 (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  controller-token-x6xv8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  controller-token-x6xv8
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason   Age                   From                      Message
  ----     ------   ----                  ----                      -------
  Normal   Pulled   50m (x169 over 14h)   kubelet, knativetutorial  Container image "gcr.io/knative-releases/knative.dev/serving/cmd/controller@sha256:71f7c9f101e7e30e82a86d203fb98d6fa607c8d6ac2fcb73fd1defd365795223" already present on machine
  Warning  BackOff  15s (x4100 over 14h)  kubelet, knativetutorial  Back-off restarting failed container

kubectl describe pods webhook-7fbf9c6d49-l79px -n knative-serving

Name:           webhook-7fbf9c6d49-l79px
Namespace:      knative-serving
Priority:       0
Node:           knativetutorial/192.168.64.4
Start Time:     Wed, 10 Jun 2020 19:17:49 +0200
Labels:         app=webhook
                pod-template-hash=7fbf9c6d49
                role=webhook
                serving.knative.dev/release=v0.14.0
Annotations:    cluster-autoscaler.kubernetes.io/safe-to-evict: false
Status:         Running
IP:             172.17.0.7
IPs:
Controlled By:  ReplicaSet/webhook-7fbf9c6d49
Containers:
  webhook:
    Container ID:   docker://19e50b4947898ea1cac07588fd788fd2b42e4ce48de949ea2780d726c69fe1a8
    Image:          gcr.io/knative-releases/knative.dev/serving/cmd/webhook@sha256:90562a10f5e37965f4f3332b0412afec1cf3dd1c06caed530213ca0603e52082
    Image ID:       docker-pullable://gcr.io/knative-releases/knative.dev/serving/cmd/webhook@sha256:90562a10f5e37965f4f3332b0412afec1cf3dd1c06caed530213ca0603e52082
    Ports:          9090/TCP, 8008/TCP, 8443/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 11 Jun 2020 10:08:39 +0200
      Finished:     Thu, 11 Jun 2020 10:08:40 +0200
    Ready:          False
    Restart Count:  179
    Limits:
      cpu:     200m
      memory:  200Mi
    Requests:
      cpu:     20m
      memory:  20Mi
    Environment:
      SYSTEM_NAMESPACE:           knative-serving (v1:metadata.namespace)
      CONFIG_LOGGING_NAME:        config-logging
      CONFIG_OBSERVABILITY_NAME:  config-observability
      METRICS_DOMAIN:             knative.dev/serving
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from controller-token-x6xv8 (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  controller-token-x6xv8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  controller-token-x6xv8
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason   Age                    From                      Message
  ----     ------   ----                   ----                      -------
  Normal   Pulled   16m (x176 over 14h)    kubelet, knativetutorial  Container image "gcr.io/knative-releases/knative.dev/serving/cmd/webhook@sha256:90562a10f5e37965f4f3332b0412afec1cf3dd1c06caed530213ca0603e52082" already present on machine
  Warning  BackOff  105s (x4096 over 14h)  kubelet, knativetutorial  Back-off restarting failed container
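The describe output only shows the back-off itself; the actual exit reason should be in the logs of the last crashed attempt. A sketch of pulling those, using the pod names from above:

# Logs from the previous (crashed) container instance of each pod
kubectl logs -n knative-serving controller-8564567c4c-bzt5j --previous
kubectl logs -n knative-serving webhook-7fbf9c6d49-l79px --previous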

@wkremser - sometimes the pods can crash if there are issues reaching gcr.io; can you try a fresh install? Also, please make sure the kubectl you are using is in line with the server, e.g. a 1.16 client against a 1.16 server.
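A fresh install along those lines might look like the following. This is only a sketch: the knativetutorial profile name is taken from the node name in the output above, and the Kubernetes version is a placeholder for whatever the tutorial's setup pins (alternatively, downgrade kubectl to match the cluster):

# Wipe the tutorial cluster and start over with a pinned Kubernetes version
minikube delete -p knativetutorial
minikube start -p knativetutorial --kubernetes-version=v1.15.12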

Unable to reproduce.