knative-extensions/net-istio

The installation of the Knative Istio controller aborts on a private cluster.

Closed this issue · 5 comments

Describe the bug
The installation of the Knative Istio controller aborts on a private cluster.

Error from server (InternalError): error when creating "https://github.com/knative/net-istio/releases/download/v0.21.0/net-istio.yaml": Internal error occurred: failed calling webhook "config.webhook.serving.knative.dev": Post https://webhook.knative-serving.svc:443/config-validation?timeout=10s: dial tcp 10.20.2.5:8443: i/o timeout

The networking-istio Pod fails with:
Failed to start configuration manager
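
The underlying failure is usually visible in the controller's own log and events; a minimal sketch, assuming the networking-istio deployment runs in the knative-serving namespace as in the v0.21.0 net-istio.yaml:

kubectl -n knative-serving logs deployment/networking-istio
kubectl -n knative-serving describe deployment networking-istio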

Expected behavior
The installation process completes without errors.

To Reproduce
Steps to reproduce the behavior.
1.) Create the cluster:

gcloud container clusters create private-cluster-1 \
    --create-subnetwork name=my-subnet-1 \
    --enable-master-authorized-networks \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-authorized-networks="$(curl -s https://icanhazip.com/)/32" \
    --master-ipv4-cidr 172.16.0.0/28 \
    --machine-type=n2-standard-2 --max-nodes=3 --min-nodes=1

2.) Install Istio 1.9.1 following https://istio.io/latest/docs/setup/install/istioctl/:

istioctl install

kubectl apply --filename https://github.com/knative/serving/releases/download/v0.21.0/serving-crds.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.21.0/serving-core.yaml

kubectl apply --filename https://github.com/knative/net-istio/releases/download/v0.21.0/net-istio.yaml

Error from server (InternalError): error when creating "https://github.com/knative/net-istio/releases/download/v0.21.0/net-istio.yaml": Internal error occurred: failed calling webhook "config.webhook.serving.knative.dev": Post https://webhook.knative-serving.svc:443/config-validation?timeout=10s: dial tcp 10.20.2.5:8443: i/o timeout
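
When the apply times out like this, it is worth checking first whether the Knative webhook pod is healthy and has endpoints; if it is, the timeout points at blocked traffic from the control plane to the nodes rather than at Knative itself. A minimal sketch, using the label and service names from serving-core.yaml as far as I can tell:

kubectl -n knative-serving get pods -l app=webhook
kubectl -n knative-serving get endpoints webhook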

Knative-GCP release version
v0.21.0

Additional context
If the cluster is not private, the installation works.

I think it is platform-specific and I need to create a firewall rule, similar to:
elastic/cloud-on-k8s#1437
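
On GKE private clusters the control plane reaches the nodes through an automatically created firewall rule that by default only allows tcp:443 and tcp:10250. Listing that rule shows what is currently permitted; a minimal sketch, assuming the cluster name from the reproduction steps (the actual rule name contains a generated hash):

# the auto-created master-to-node rule is usually named gke-<cluster>-<hash>-master
gcloud compute firewall-rules list \
    --filter="name~gke-private-cluster-1-[0-9a-z]*-master"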

It works if I open all ports... I will try to find out which ports must be opened in detail.

I will try to find out which ports must be opened in detail.

The Kubernetes API server must be able to reach the Knative Webhooks.
They are listening on 8883.

A better approach is to have them listen on a port already allowed by the default firewall rules: 443 (which requires running as root) or 10250. This requires patching the Knative release YAML.
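
As a quick fix under that constraint, the webhook port can be added to the auto-created master rule found with the listing above. This is only a sketch: 8443 is the port taken from the i/o timeout in the error message, and because --allow replaces the existing list, the default ports are repeated:

# add the webhook port to the GKE master-to-node firewall rule
MASTER_RULE="$(gcloud compute firewall-rules list \
    --filter="name~gke-private-cluster-1-[0-9a-z]*-master" --format="value(name)")"
gcloud compute firewall-rules update "${MASTER_RULE}" --allow tcp:443,tcp:10250,tcp:8443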

@JRBANCEL

Thanks for your help.
I opened port 8883 (and 8443, because of nginx ingress) instead of *.

Now it works. BUT:
- The deployment process (helloworld-go) is much slower than with all ports open (until the revision becomes active).
- Applying the net-istio installation template https://github.com/knative/net-istio/releases/download/v0.21.0/net-istio.yaml is 10 times slower than with all ports open.

  1. There must be traffic that is blocked and slows the processes down due to timeouts. How could I investigate this? (See the sketch after the install commands below.)

  2. In which file can I patch the listen port from 8883 to 443? I did not find it in my installation files:

# install istio

cd /Users/sklose/Downloads/istio-1.9.1
export PATH=$PWD/bin:$PATH
istioctl install --set profile=default -y

# install knative serving

kubectl apply --filename https://github.com/knative/serving/releases/download/v0.21.0/serving-crds.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.21.0/serving-core.yaml

kubectl apply --filename https://github.com/knative/net-istio/releases/download/v0.21.0/net-istio.yaml

# install knative eventing

kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.21.0/eventing-crds.yaml
kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.21.0/eventing-core.yaml

kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.21.0/in-memory-channel.yaml
kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.21.0/mt-channel-broker.yaml

# install knative-gcp

export KGCP_VERSION=v0.21.0
kubectl create namespace cloud-run-events
kubectl apply --filename https://github.com/google/knative-gcp/releases/download/${KGCP_VERSION}/cloud-run-events-pre-install-jobs.yaml

kubectl apply --selector events.cloud.google.com/crd-install=true \
--filename https://github.com/google/knative-gcp/releases/download/${KGCP_VERSION}/cloud-run-events.yaml

kubectl apply --filename https://github.com/google/knative-gcp/releases/download/${KGCP_VERSION}/cloud-run-events.yaml
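
Regarding question 1 above: timeouts against unreachable admission webhooks are the usual cause of this kind of slowdown, because every blocked call waits out its timeoutSeconds (10s in the error at the top of this issue) before the request continues or fails. A minimal sketch for spotting them, assuming the deployment names shipped in the v0.21.0 release YAMLs:

# every registered webhook configuration is an extra call the API server must make
kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations

# warnings mentioning webhooks or timeouts while a slow apply/deploy is running
kubectl get events -A --sort-by='.lastTimestamp' | grep -iE "webhook|timeout"

# the Knative webhooks themselves; a reachable webhook logs the admission requests it serves
kubectl -n knative-serving logs deployment/webhook --tail=50
kubectl -n knative-serving logs deployment/istio-webhook --tail=50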

I opened the port with source = --master-ipv4-cidr (172.16.0.16/28) and target = ServiceAccount.
If I add the port to the generated firewall rule k8s-fw-* with source 0.0.0.0/0 and target tag gke-private-cluster-1-e9fe715e-node, the problem seems to be solved (creation is 10 times faster).
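
A narrower alternative to widening the generated k8s-fw-* rule to 0.0.0.0/0 is a dedicated rule scoped to the master CIDR and the node tag mentioned above. This is only a sketch: it assumes the cluster lives in the default VPC network, the rule name is arbitrary, and tcp:8443 is the webhook port taken from the timeout error:

# allow only the GKE control plane to reach the webhook port on the nodes
gcloud compute firewall-rules create allow-gke-master-to-knative-webhooks \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:8443 \
    --source-ranges=172.16.0.16/28 \
    --target-tags=gke-private-cluster-1-e9fe715e-node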