knative-extensions/eventing-kafka-broker

kafka-source-dispatcher pods are not running; they get deleted about a minute after the StatefulSet object is first created


Describe the bug
The kafka-source-dispatcher StatefulSet is not able to spin up new pods; they get deleted immediately after being provisioned. Here is the result of kubectl describe statefulset kafka-source-dispatcher -n knative-eventing:
(screenshot of the kubectl describe output)

Expected behavior
kafka-source-dispatcher pods should spin up and stay running. Below are the steps I followed to install Knative and knative-extensions.

To Reproduce
Write-Host "Installing serving-crds"
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.16.2/serving-crds.yaml
Write-Host "Installing serving-core"
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.16.2/serving-core.yaml
Write-Host "Patching namespace and hpa for serving deployments"
kubectl label namespace knative-serving istio.io/rev=prod-stable --overwrite
kubectl patch hpa activator -n knative-serving -p '{"spec":{"minReplicas": 2}}'
kubectl patch hpa webhook -n knative-serving -p '{"spec":{"minReplicas": 2}}'
Write-Host "Installing eventing-crds"
kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.16.2/eventing-crds.yaml
Write-Host "Installing eventing-core"
kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.16.2/eventing-core.yaml
Write-Host "Patching namespace and hpa for eventing deployments"
kubectl label namespace knative-eventing istio.io/rev=prod-stable --overwrite
kubectl patch hpa eventing-webhook -n knative-eventing -p '{"spec":{"minReplicas": 2}}'
Write-Host "Installing net-istio"
kubectl apply -f https://github.com/knative/net-istio/releases/download/knative-v1.16.0/net-istio.yaml
Write-Host "Installing eventing-kafka-controller"
kubectl apply -f https://github.com/knative-extensions/eventing-kafka-broker/releases/download/knative-v1.16.1/eventing-kafka-controller.yaml
Write-Host "Installing eventing-kafka-channel"
kubectl apply -f https://github.com/knative-extensions/eventing-kafka-broker/releases/download/knative-v1.16.1/eventing-kafka-channel.yaml
Write-Host "Installing mt-channel-broker"
kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.16.2/mt-channel-broker.yaml
Write-Host "Installing eventing-kafka-broker"
kubectl apply -f https://github.com/knative-extensions/eventing-kafka-broker/releases/download/knative-v1.16.1/eventing-kafka-broker.yaml
Write-Host "Installing eventing-kafka-source and sink"
kubectl apply -f https://github.com/knative-extensions/eventing-kafka-broker/releases/download/knative-v1.16.1/eventing-kafka-source.yaml
kubectl apply -f https://github.com/knative-extensions/eventing-kafka-broker/releases/download/knative-v1.16.1/eventing-kafka-sink.yaml
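
After the script runs, I check the result with the following (a sketch; resource names are the ones from the report above):

kubectl get pods -n knative-eventing
kubectl get statefulset kafka-source-dispatcher -n knative-eventing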

Knative release version
1.16.2
Additional context
kafka-channel-dispatcher and kafka-broker-dispatcher pods are not coming up, and we are getting the following error:
create Pod kafka-channel-dispatcher-0 in StatefulSet kafka-channel-dispatcher failed error: Pod "kafka-channel-dispatcher-0" is invalid: spec.containers[0].volumeMounts[1].name: Not found: "contract-resources"
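
For reference, here is how the failing StatefulSet can be inspected (a sketch of the commands; resource names taken from the error above):

kubectl describe statefulset kafka-channel-dispatcher -n knative-eventing
kubectl get events -n knative-eventing --field-selector involvedObject.name=kafka-channel-dispatcher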

Please let me know how to deploy this. I have been blocked on this for a week now.

@pierDipi Any help on this would be highly appreciated. Our team is completely reliant on this, and we are blocked, unable to upgrade to the latest version of the broker.

@raswinraaj, if you wait for the control plane to be ready before installing the data plane (the eventing-kafka-broker.yaml or eventing-kafka-source.yaml files), do you still get the error?

The installation expectation is that, whenever a new StatefulSet pod is created, the Knative Kafka webhook is ready and the admission webhook is configured.
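
For example, a minimal sketch of such a wait, assuming the control-plane Deployments are named kafka-controller and kafka-webhook-eventing (names may differ by version):

kubectl wait deployment kafka-controller -n knative-eventing --for=condition=Available --timeout=300s
kubectl wait deployment kafka-webhook-eventing -n knative-eventing --for=condition=Available --timeout=300s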

You need to confirm that the namespace (or StatefulSet) carries the labels required by the selectors of the pods.defaulting.webhook.kafka.eventing.knative.dev webhook:

namespaceSelector:
  matchExpressions:
  - key: webhooks.knative.dev/exclude
    operator: DoesNotExist
  matchLabels:
    kubernetes.io/metadata.name: knative-eventing

It requires the namespace to include kubernetes.io/metadata.name: knative-eventing.
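
Both the selector and the namespace labels can be checked directly; a sketch using the webhook name from above:

kubectl get mutatingwebhookconfiguration pods.defaulting.webhook.kafka.eventing.knative.dev -o jsonpath='{.webhooks[0].namespaceSelector}'
kubectl get namespace knative-eventing --show-labels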

Do you mean the knative-eventing namespace? If yes, I already have this label on my namespace.

This is the response
Name:   knative-eventing
Labels: app.kubernetes.io/name=knative-eventing
        app.kubernetes.io/version=1.16.2
        istio.io/rev=prod-stable
        kubernetes.io/metadata.name=knative-eventing
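
Would something like this confirm whether the webhook itself was ready when the pods were created (assuming the webhook Deployment and Service are both named kafka-webhook-eventing, which I am not sure about)?

kubectl get deployment kafka-webhook-eventing -n knative-eventing
kubectl get endpoints kafka-webhook-eventing -n knative-eventing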

Hi @pierDipi, thanks for your reply. Do you mean I need to wait for a few minutes after installing eventing-kafka-controller.yaml?