[BUG] Script fails intermittently with this error. Need to debug and fix, possibly by adding wait time between commands.
Opened this issue · 1 comment
dushyantbehl commented
Logs ->
==> Checking if prerequisites are met
==> Everything is good :-)
==> Delete kind clusters
/home/dushyant/work/multi-cloud-research/bin/kind-v0.18.0 delete cluster --name east
Deleting cluster "east" ...
/home/dushyant/work/multi-cloud-research/bin/kind-v0.18.0 delete cluster --name west
Deleting cluster "west" ...
==> Create kind clusters
docker kill proxy;docker rm proxy; docker run -d --name proxy --restart=always --net=kind -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io registry:2
proxy
proxy
3feae1bce64c6098ab4b8f522f1df7da259a2e5add3a0e3fcf768b53a24fe954
/home/dushyant/work/multi-cloud-research/bin/kind-v0.18.0 create cluster --name east --config contrib/kind/kindeastconfig.yaml
Creating cluster "east" ...
✓ Ensuring node image (kindest/node:v1.26.3) 🖼
✓ Preparing nodes 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-east"
You can now use your cluster with:
kubectl cluster-info --context kind-east
Have a nice day! 👋
kubectl config use-context kind-east
Switched to context "kind-east".
kubectl create namespace east
namespace/east created
kubectl config set-context --current --namespace=east
Context "kind-east" modified.
/home/dushyant/work/multi-cloud-research/bin/kind-v0.18.0 create cluster --name west --config contrib/kind/kindwestconfig.yaml
Creating cluster "west" ...
✓ Ensuring node image (kindest/node:v1.26.3) 🖼
✓ Preparing nodes 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-west"
You can now use your cluster with:
kubectl cluster-info --context kind-west
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
kubectl config use-context kind-west
Switched to context "kind-west".
kubectl create namespace west
namespace/west created
kubectl config set-context --current --namespace=west
Context "kind-west" modified.
docker network inspect -f '{{.IPAM.Config}}' kind
[{172.18.0.0/16 172.18.0.1 map[]} {fc00:f853:ccd:e793::/64 fc00:f853:ccd:e793::1 map[]}]
==> Deploy calico cni
kubectl config use-context kind-east
Switched to context "kind-east".
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/tigera-operator.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
kubectl wait --namespace tigera-operator --for=condition=ready pod --selector=name=tigera-operator --timeout=180s
pod/tigera-operator-5d6845b496-b7v7s condition met
kubectl create -f contrib/calico/calicoeastconfig.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
kubectl config use-context kind-west
Switched to context "kind-west".
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/tigera-operator.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
kubectl wait --namespace tigera-operator --for=condition=ready pod --selector=name=tigera-operator --timeout=180s
error: no matching resources found
make: *** [.mk/kind.mk:27: deploy-cni] Error 1
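For context, the error looks like a race rather than a slow pod: kubectl wait --for=condition=ready pod fails immediately with "no matching resources found" when the tigera-operator Deployment has not created its pod yet, so a longer --timeout on the wait alone may not be enough. A minimal sketch of a guard that polls until the pod exists before waiting on readiness (the retry count and sleep interval are illustrative, not taken from the repo):

# Poll until at least one operator pod exists (up to ~60s, illustrative values),
# then run the existing readiness wait.
for i in $(seq 1 30); do
  kubectl get pod -n tigera-operator -l name=tigera-operator --no-headers 2>/dev/null | grep -q . && break
  sleep 2
done
kubectl wait --namespace tigera-operator --for=condition=ready pod --selector=name=tigera-operator --timeout=180s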
eranra commented
@dushyantbehl I saw that @KalmanMeth extended the timeout; can you check https://github.com/netobserv/multi-cloud-research/pulls? Maybe this is just a problem of not waiting long enough.
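An alternative that sidesteps the race entirely (a sketch, assuming the deploy-cni step can be changed): wait on the Deployment rollout instead of on the pod selector, since the Deployment object exists as soon as kubectl create returns and kubectl rollout status blocks until its pods are ready or the timeout expires.

# Sketch: replace the pod-selector wait with a rollout wait on the Deployment.
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/tigera-operator.yaml
kubectl rollout status deployment/tigera-operator -n tigera-operator --timeout=180s

Raising the timeout, as in the PRs mentioned above, would still help on slow machines once the pod actually exists.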