
When deploying using Helm from master, istio-policy and istio-telemetry go into CrashLoopBackOff


Is this a BUG or FEATURE REQUEST?:

BUG

Did you review https://istio.io/help/ and existing issues to identify if this is already solved or being worked on?:

Yes

Bug:
Y

What Version of Istio and Kubernetes are you using, where did you get Istio from, Installation details

Istio version: master (0.8.0)
Kubernetes version (via kubectl): 1.9.5

Is Istio Auth enabled or not?
Did you install the stable istio.yaml, istio-auth.yaml.... or if using the Helm chart please provide full command line input.
Installed using Helm as follows:

helm install istio-master/install/kubernetes/helm/istio --name istio --namespace=istio-system --set sidecar-injector.enabled=true --set global.proxy.image=proxyv2

I also tried adding --set global.mtls.enabled=false, since I saw this error can come up with mTLS enabled.
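
For reference, the full command with that flag appended (chart path and other values assumed unchanged from the command above) was roughly:

helm install istio-master/install/kubernetes/helm/istio --name istio --namespace=istio-system --set sidecar-injector.enabled=true --set global.proxy.image=proxyv2 --set global.mtls.enabled=false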

What happened:

kubectl get po --all-namespaces 
NAMESPACE      NAME                                                                  READY     STATUS             RESTARTS   AGE
istio-system   istio-citadel-584cfd5d56-2vp9s                                        1/1       Running            0          11m
istio-system   istio-egressgateway-7c575bf7f8-qc8ng                                  1/1       Running            0          11m
istio-system   istio-ingress-6888bd7848-brmn5                                        1/1       Running            0          11m
istio-system   istio-ingressgateway-6c678bb8-dwqvg                                   1/1       Running            0          11m
istio-system   istio-pilot-75dc6f7d5-459vc                                           2/2       Running            0          11m
istio-system   istio-policy-855bcc896c-snzwk                                         1/2       CrashLoopBackOff   7          11m
istio-system   istio-sidecar-injector-68df48d647-h7kmr                               1/1       Running            0          11m
istio-system   istio-statsd-prom-bridge-6dbb7dcc7f-rf2cw                             1/1       Running            0          11m
istio-system   istio-telemetry-54d4d7d5ff-znvt5                                      1/2       CrashLoopBackOff   7          11m

Looking at logs:

kubectl logs deployment/istio-policy istio-proxy --namespace=istio-system
2018-05-22T15:42:37.584505Z	info	Version circleci@79c494ea6b3c-docker.io/istio-2ac5ccdaa3fcd5dda5b3874b81e7ed7f33ffb80f-2ac5ccdaa3fcd5dda5b3874b81e7ed7f33ffb80f-Clean
2018-05-22T15:42:37.584559Z	info	Proxy role: model.Proxy{ClusterID:"", Type:"sidecar", IPAddress:"10.230.64.220", ID:"istio-policy-855bcc896c-snzwk.istio-system", Domain:"istio-system.svc.cluster.local", Metadata:map[string]string(nil)}
2018-05-22T15:42:37.584829Z	info	Effective config: binaryPath: /usr/local/bin/envoy
configPath: /etc/istio/proxy
connectTimeout: 1.000s
discoveryAddress: istio-pilot:15007
discoveryRefreshDelay: 1.000s
drainDuration: 2.000s
parentShutdownDuration: 3.000s
proxyAdminPort: 15000
serviceCluster: istio-policy

2018-05-22T15:42:37.584854Z	info	Monitored certs: []v1.CertSource{v1.CertSource{Directory:"/etc/certs/", Files:[]string{"cert-chain.pem", "key.pem", "root-cert.pem"}}}
2018-05-22T15:42:37.585064Z	info	Static config:
admin:
  access_log_path: /dev/stdout
  address:
    socket_address:
      address: 127.0.0.1
      port_value: 15000
static_resources:
  clusters:
  - circuit_breakers:
      thresholds:
      - max_connections: 100000
        max_pending_requests: 100000
        max_requests: 100000
        max_retries: 3
    connect_timeout: 1.000s
    hosts:
    - socket_address:
        address: 127.0.0.1
        port_value: 9091
    http2_protocol_options: {}
    name: in.9091
  - connect_timeout: 1.000s
    hosts:
    - socket_address:
        address: zipkin
        port_value: 9411
    name: zipkin
    type: STRICT_DNS
  - circuit_breakers:
      thresholds:
      - max_connections: 100000
        max_pending_requests: 100000
        max_requests: 100000
        max_retries: 3
    connect_timeout: 1.000s
    hosts:
    - socket_address:
        address: istio-telemetry
        port_value: 15004
    http2_protocol_options: {}
    name: mixer_report_server
    type: STRICT_DNS
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 15004
    filter_chains:
    - filters:
      - config:
          access_log:
          - config:
              path: /dev/stdout
            name: envoy.file_access_log
          codec_type: HTTP2
          generate_request_id: true
          http_filters:
          - config:
              default_destination_service: istio-policy.istio-system.svc.cluster.local
              service_configs:
                istio-policy.istio-system.svc.cluster.local:
                  disable_check_calls: true
                  mixer_attributes:
                    attributes:
                      destination.service:
                        string_value: istio-policy.istio-system.svc.cluster.local
                      destination.uid:
                        string_value: kubernetes://istio-policy-855bcc896c-snzwk.istio-system
              transport:
                check_cluster: mixer_check_server
                report_cluster: mixer_report_server
            name: mixer
          - name: envoy.router
          route_config:
            name: "15004"
            virtual_hosts:
            - domains:
              - '*'
              name: istio-policy.istio-system.svc.cluster.local
              routes:
              - decorator:
                  operation: Report
                match:
                  prefix: /
                route:
                  cluster: in.9091
                  timeout: 0.000s
          stat_prefix: "15004"
          tracing: {}
        name: envoy.http_connection_manager
    name: "15004"
tracing:
  http:
    config:
      collector_cluster: zipkin
      collector_endpoint: /api/v1/spans
    name: envoy.zipkin

2018-05-22T15:42:37.585501Z	info	Starting proxy agent
.
.
.
[2018-05-22 15:42:42.603][18][info][config] external/envoy/source/server/listener_manager_impl.cc:602] all dependencies initialized. starting workers
[2018-05-22 15:42:45.604][18][info][main] external/envoy/source/server/drain_manager_impl.cc:63] shutting down parent after drain
2018-05-22T15:43:07.586151Z	info	Unable to retrieve availability zone from pilot: Get http://istio-pilot:15007/v1/az/istio-policy/sidecar~10.230.64.220~istio-policy-855bcc896c-snzwk.istio-system~istio-system.svc.cluster.local: dial tcp 10.230.7.18:15007: i/o timeout 0
2018-05-22T15:43:37.591928Z	info	Unable to retrieve availability zone from pilot: <nil> 1
2018-05-22T15:44:07.596203Z	info	Unable to retrieve availability zone from pilot: <nil> 2
2018-05-22T15:44:37.599758Z	info	Unable to retrieve availability zone from pilot: <nil> 3
2018-05-22T15:45:07.603952Z	info	Unable to retrieve availability zone from pilot: <nil> 4
2018-05-22T15:45:37.608397Z	info	Unable to retrieve availability zone from pilot: <nil> 5
2018-05-22T15:46:07.612400Z	info	Unable to retrieve availability zone from pilot: <nil> 6
2018-05-22T15:46:37.616149Z	info	Unable to retrieve availability zone from pilot: <nil> 7
2018-05-22T15:47:07.620387Z	info	Unable to retrieve availability zone from pilot: <nil> 8
2018-05-22T15:47:37.624941Z	info	Unable to retrieve availability zone from pilot: <nil> 9
2018-05-22T15:48:07.629210Z	info	Unable to retrieve availability zone from pilot: <nil> 10
2018-05-22T15:48:07.629264Z	error	Failed to connect to pilot. Fallback to starting with defaults and no AZ AZ error 404

The log for istio-telemetry is similar.
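
The "Unable to retrieve availability zone from pilot ... i/o timeout" lines suggest the proxy sidecar cannot reach istio-pilot on the discovery port 15007. Some standard checks that could narrow this down (not run as part of this report; the "discovery" container name is an assumption for this chart version):

kubectl -n istio-system get svc istio-pilot
kubectl -n istio-system get endpoints istio-pilot
kubectl -n istio-system logs deployment/istio-pilot discovery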

What you expected to happen:
istio-policy and istio-telemetry should be in the Running state like the rest of the pods.

How to reproduce it:
Deploy using Helm following the latest instructions (see the commands below).
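
Concretely, the steps used here (with the master branch checked out under istio-master, as in the command above):

helm install istio-master/install/kubernetes/helm/istio --name istio --namespace=istio-system --set sidecar-injector.enabled=true --set global.proxy.image=proxyv2
kubectl get po --all-namespaces

istio-policy and istio-telemetry then show CrashLoopBackOff (7 restarts within the first 11 minutes in the output above).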