Running pods do not display webpage
Urgen-Dorjee opened this issue · 1 comment
Well, I got pretty much to the end of the project outline, but I'm stuck almost at the finish line. I have deployed all the containers to Kubernetes and everything works great. Now I'm trying to configure Istio, but I'm stuck somewhere and not sure what the problem is. I honestly followed all the steps as described in the wiki page, chose "method 2" for installing Istio, and currently have version 1.2.5 installed.
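For reference, this is the quick sanity check I ran on the Istio installation (a sketch, assuming the default istio-system namespace and that istioctl is on the PATH):

# should report client and control-plane version 1.2.5
istioctl version
# all control-plane pods should be Running
kubectl get pods -n istio-system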
kubectl get pods --namespace auto
NAME READY STATUS RESTARTS AGE
auditlogservicemanager-77bbf6795b-w5tpk 1/1 Running 0 12m
customerapi-v1-597774488d-7xcwv 1/2 Running 0 12m
invoiceservice-b8ff48f89-jz86l 1/1 Running 0 12m
logserver-758d4598f6-g8b4x 1/1 Running 0 12m
mailserver-64444466b5-zpcmc 1/1 Running 0 12m
notificationservicemanager-5f57697696-gc5wf 1/1 Running 0 12m
rabbitmq-66845d69cb-57dm7 1/1 Running 0 12m
sqlserver-7866c795f9-8vv9l 1/1 Running 0 12m
timeservice-5b67f48845-c5p4n 1/1 Running 0 12m
vehiclemanagementapi-7fbff9cbbf-llpx8 1/2 Running 0 12m
web-78794b65cb-z58wh 1/2 Running 0 12m
workshopeventhandler-77db84d66d-9r8gv 1/1 Running 0 12m
workshopmanagementapi-777447ddd8-vtc5w 0/2 CrashLoopBackOff 4 12m
All of the pods are running except workshopmanagementapi, which keeps crashing over and over again.
I am also a bit confused: the pods listed below show more than one container in the READY column, but the screenshot on your wiki page shows only 1/1 for every pod, even though my configuration and settings are the same and I followed exactly the same steps you described (see the check I sketched after this listing).
NAME READY STATUS RESTARTS AGE
customerapi-v1-597774488d-7xcwv 1/2 Running 0 12m
vehiclemanagementapi-7fbff9cbbf-llpx8 1/2 Running 0 12m
web-78794b65cb-z58wh 1/2 Running 0 12m
workshopmanagementapi-777447ddd8-vtc5w 0/2 CrashLoopBackOff 4 12m
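For what it's worth, here is how the containers inside one of these pods can be listed (a sketch; the pod name is taken from the listing above and the auto namespace from the describe output further down):

# prints the names of the containers in the pod's spec
kubectl get pod web-78794b65cb-z58wh -n auto -o jsonpath='{.spec.containers[*].name}'
# expected to print something like: web istio-proxy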
Problem description:
When I test the application as described in the wiki, none of the pages are displayed, even though everything appears to be running fine in the Kubernetes cluster. Istio can't serve anything on localhost; instead the browser shows the standard error page:
This page can't be reached.
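As a side note, if traffic is supposed to enter through the Istio ingress gateway (an assumption on my part, based on the standard Istio setup), the exposed service and its ports can be checked like this:

kubectl get svc istio-ingressgateway -n istio-system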
Therefore I checked the logs and events of these running pods and was able to extract the following errors, although I am not sure whether they are directly related to the error mentioned above:
kubectl describe pod {podname} --namespace auto
Name: web-78794b65cb-z58wh
Namespace: auto
Priority: 0
PriorityClassName: <none>
Node: docker-desktop/192.168.65.3
Start Time: Thu, 05 Sep 2019 14:35:05 +0530
Labels: app=web
pod-template-hash=78794b65cb
system=auto
version=v1
Annotations: sidecar.istio.io/status:
{"version":"761ebc5a63976754715f22fcf548f05270fb4b8db07324894aebdb31fa81d960","initContainers":["istio-init"],"containers":["istio-proxy"]...
Status: Running
IP: 10.1.2.72
Controlled By: ReplicaSet/web-78794b65cb
Init Containers:
istio-init:
Container ID: docker://a2341e6ac0f02537cbc5a47b6e23eee0a468c9d5edbfb5d0e832f8cef5343358
Image: gcr.io/istio-release/proxy_init:release-1.2-latest-daily
Image ID: docker-pullable://gcr.io/istio-release/proxy_init@sha256:129db113aadd8723e2cf80e1b1665cd404af57af563a49c880a56692688e07d9
Port: <none>
Host Port: <none>
Args:
-p
15001
-u
1337
-m
REDIRECT
-i
*
-x
-b
7000
-d
15020
State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 05 Sep 2019 14:35:18 +0530
Finished: Thu, 05 Sep 2019 14:35:20 +0530
Ready: True
Restart Count: 0
Limits:
cpu: 100m
memory: 50Mi
Requests:
cpu: 10m
memory: 10Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-7jgkh (ro)
Containers:
web:
Container ID: docker://0f4399c834e38c97096c55031f0f13b4b4ce709221657b9497dcf0e7e4bc0738
Image: web:latest
Image ID: docker://sha256:1be5e1876e9c8167dceb6d0fed0047fbc2f4b972d18408e3f985b6004cb82340
Port: 7000/TCP
Host Port: 0/TCP
State: Running
Started: Thu, 05 Sep 2019 14:35:27 +0530
Ready: True
Restart Count: 0
Environment:
ASPNETCORE_ENVIRONMENT: Production
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-7jgkh (ro)
istio-proxy:
Container ID: docker://d13f250851045fcdc586c39358a47e58ba01907fa8e4bf9ed2b729270c774f57
Image: gcr.io/istio-release/proxyv2:release-1.2-latest-daily
Image ID: docker-pullable://gcr.io/istio-release/proxyv2@sha256:750b3fa0400c74f9bad0b4ed19255b16913be8eb1693e7fabb6630ce4ef0a93b
Port: 15090/TCP
Host Port: 0/TCP
Args:
proxy
sidecar
--domain
$(POD_NAMESPACE).svc.cluster.local
--configPath
/etc/istio/proxy
--binaryPath
/usr/local/bin/envoy
--serviceCluster
web.$(POD_NAMESPACE)
--drainDuration
45s
--parentShutdownDuration
1m0s
--discoveryAddress
istio-pilot.istio-system:15010
--zipkinAddress
zipkin.istio-system:9411
--dnsRefreshRate
300s
--connectTimeout
10s
--proxyAdminPort
15000
--concurrency
2
--controlPlaneAuthPolicy
NONE
--statusPort
15020
--applicationPorts
7000
State: Running
Started: Thu, 05 Sep 2019 14:35:33 +0530
Ready: False
Restart Count: 0
Limits:
cpu: 2
memory: 1Gi
Requests:
cpu: 100m
memory: 128Mi
Readiness: http-get http://:15020/healthz/ready delay=1s timeout=1s period=2s #success=1 #failure=30
Environment:
POD_NAME: web-78794b65cb-z58wh (v1:metadata.name)
POD_NAMESPACE: auto (v1:metadata.namespace)
INSTANCE_IP: (v1:status.podIP)
ISTIO_META_POD_NAME: web-78794b65cb-z58wh (v1:metadata.name)
ISTIO_META_CONFIG_NAMESPACE: auto (v1:metadata.namespace)
ISTIO_META_INTERCEPTION_MODE: REDIRECT
ISTIO_META_INCLUDE_INBOUND_PORTS: 7000
ISTIO_METAJSON_LABELS: {"app":"web","system":"auto","version":"v1"}
Mounts:
/etc/certs/ from istio-certs (ro)
/etc/istio/proxy from istio-envoy (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-7jgkh (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
istio-envoy:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: <unset>
istio-certs:
Type: Secret (a volume populated by a Secret)
SecretName: istio.default
Optional: true
default-token-7jgkh:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-7jgkh
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 23m default-scheduler Successfully assigned auto/web-78794b65cb-z58wh to docker-desktop
Normal Pulled 23m kubelet, docker-desktop Container image "gcr.io/istio-release/proxy_init:release-1.2-latest-daily" already present on machine
Normal Created 23m kubelet, docker-desktop Created container istio-init
Normal Started 23m kubelet, docker-desktop Started container istio-init
Normal Pulled 23m kubelet, docker-desktop Container image "web:latest" already present on machine
Normal Created 23m kubelet, docker-desktop Created container web
Normal Started 23m kubelet, docker-desktop Started container web
Normal Pulled 23m kubelet, docker-desktop Container image "gcr.io/istio-release/proxyv2:release-1.2-latest-daily" already present on machine
Normal Created 22m kubelet, docker-desktop Created container istio-proxy
Normal Started 22m kubelet, docker-desktop Started container istio-proxy
Warning Unhealthy 22m kubelet, docker-desktop Readiness probe failed: Get http://10.1.2.72:15020/healthz/ready: dial tcp 10.1.2.72:15020: connect: connection refused
Warning Unhealthy 3m12s (x590 over 22m) kubelet, docker-desktop Readiness probe failed: HTTP probe failed with statuscode: 503
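In case it helps, these are the next things I would look at for the 503 on /healthz/ready (a sketch; the pod name and auto namespace are taken from the output above, and istioctl is assumed to be installed):

# inspect the sidecar's own logs for startup or connection errors
kubectl logs web-78794b65cb-z58wh -n auto -c istio-proxy --tail=100
# check whether the sidecars are synced with Pilot
istioctl proxy-status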
I am not quite sure what is going on. I have checked the Istio and Kubernetes docs as well as Stack Overflow but couldn't figure it out, and honestly those docs are painful to follow on Windows: most of the examples are bash commands and I have hardly seen any Windows commands.
So I would like to ask: did I miss any settings or configuration that I am not aware of, even though I have gone through your wiki?
Thanks!
Hi @Urgen-Dorjee. I've reset my Docker installation to factory defaults and followed the wiki to set everything up from scratch. I also upgraded my Istio and Helm installations to the latest version (Istio 1.2.5). Everything starts up fine.
Also, the Pitstop web-application is operational and I can access everything fine from the browser.
I have only 1 replica running for each of my pods. I don't know why some of yours are running multiple replicas; this must be configured in the YAML config files for those services. Please double-check this.
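A quick way to compare the configured replica count with what is actually running (assuming the deployments live in the auto namespace) would be something like:

# the READY column shows ready/desired replicas; desired comes from spec.replicas in the yaml
kubectl get deployments -n auto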
Also, did you rebuild the Docker images after pulling the latest version of the repo?
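One way to check when the local images were last built, without assuming anything about the repo's build scripts:

# the CREATED column should be newer than your last pull of the repo
docker image ls web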
Based on your info, I'm not able to see what went wrong, so I'm sorry, but unfortunately I can't help you any further with this issue.