magneticio/vamp-ui

Deployment reports as failed, but health is 100% and I am able to access it by instance and gateway

Opened this issue · 4 comments

@gggina please add all relevant debugging info: versions of all components used, steps to reproduce, etc. Thanks.

I'm running on Docker for Mac.
Deployed from the sava:1.0 blueprint; no other steps.
Versions:

RUNNING SINCE	23-03-2017 13:48:25
VERSION	katana
UI VERSION	0.9.3-78-gc8723c0
PERSISTENCE	zookeeper
PULSE	elasticsearch
KEY VALUE STORE	zookeeper
GATEWAY DRIVER	haproxy
CONTAINER DRIVER	marathon
WORKFLOW DRIVER	marathon, chronos

Log:

14:41:53.720	ERROR	io.vamp.common.notification.Notification	Deployment service error for deployment 'sava:1.0' and service 'sava:1.0.0'.

Deployment source:

name: sava:1.0
kind: deployment
metadata: {}
lookup_name: 6528060b3dd511a46aa9ef048fbfc27505d2a970
clusters:
  sava:
    metadata: {}
    services:
    - status:
        intention: Deployment
        since: '2017-03-23T13:41:53.725Z'
        phase:
          name: Failed
          since: '2017-03-23T13:41:53.725Z'
          notification: Deployment service error was not restored. Deployment service error for deployment 'sava:1.0' and service 'sava:1.0.0'.
      breed:
        name: sava:1.0.0
        kind: breed
        metadata: {}
        deployable:
          definition: magneticio/sava:1.0.0
        ports:
          webport: 8080/http
          jsonport: 8081/http
        environment_variables:
          SAVA_DEBUG: 'true'
        constants: {}
        arguments: []
        dependencies: {}
      environment_variables:
        SAVA_DEBUG: 'true'
      scale:
        cpu: 0.2
        memory: 64.00MB
        instances: 1
      instances:
      - name: vamp_deployment-6528060b3dd511a46aa9ef048fbfc27505d2a970-service-f6688d352a6f996f10f41a736bf661babec6a45f.0c7de66b-0fc9-11e7-a48b-0242ac110002
        host: 192.168.65.2
        ports:
          webport: 31619
          jsonport: 31620
        deployed: true
        metadata: {}
      arguments:
      - privileged: 'true'
      health_checks:
      - path: /
        port: webport
        initial_delay: 5s
        timeout: 5s
        interval: 5s
        failures: 5
        protocol: HTTP
      dependencies: {}
      dialects: {}
      health:
        staged: 0
        running: 1
        healthy: 0
        unhealthy: 0
    gateways:
      webport:
        sticky: null
        virtual_hosts:
        - webport.sava.sava-1-0.vamp
        routes:
          sava:1.0.0:
            lookup_name: 2ae7483f14e4cfc6433ee05c18ea191663a1f915
            weight: 100%
            balance: default
            condition: null
            condition_strength: 0%
            rewrites: []
      jsonport:
        sticky: null
        virtual_hosts:
        - jsonport.sava.sava-1-0.vamp
        routes:
          sava:1.0.0:
            lookup_name: 3ed1ecd359efe3489b70ed8fb58846e315de7502
            weight: 100%
            balance: default
            condition: null
            condition_strength: 0%
            rewrites: []
    dialects: {}
ports:
  sava.webport: '40001'
  sava.jsonport: '40000'
environment_variables:
  sava.SAVA_DEBUG: 'true'
hosts:
  sava: 192.168.65.2
dialects: {}

The displayed status is aligned with the data coming from the server, so this is not a UI bug; it may not be a bug at all. A service can be in a failed state and still run, at least partially.
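The mismatch can be seen directly in the `health` block of the deployment above (`running: 1`, `healthy: 0`). A minimal sketch of two ways a UI could turn those counts into a percentage; the function names and the assumption about which formula vamp-ui uses are hypothetical, not taken from the actual code:

```python
# Counts copied from the deployment YAML above.
health = {"staged": 0, "running": 1, "healthy": 0, "unhealthy": 0}
expected_instances = 1  # from scale.instances

def pct_running(h, expected):
    """Share of expected instances that are running at all."""
    return 100.0 * h["running"] / expected if expected else 0.0

def pct_healthy(h):
    """Share of staged+running instances that actually pass health checks."""
    total = h["running"] + h["staged"]
    return 100.0 * h["healthy"] / total if total else 0.0

print(pct_running(health, expected_instances))  # 100.0 -> what the UI shows
print(pct_healthy(health))                      # 0.0   -> what `healthy: 0` implies
```

If the UI computes something like `pct_running`, a service whose health checks never pass (hence the `Failed` phase) will still display as 100% healthy, which would explain the report.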

This is also the case when a service is in the "deploying" state: health is shown as 100%, and one can click into the instance view and see 100% health (green) but no instances, since nothing is running yet. In the DC/OS UI it is not possible to enter such a deployment yet, which is the correct behaviour, as the deployment is not done. I would expect no health data to be shown while the service is still deploying, and preferably also a state indicator in the instances screen that says "deploying", "waiting", or something similar.
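The display rule suggested above could be sketched as follows: only render a health percentage once the service phase is "Deployed", and otherwise show the phase itself. The function name and the exact phase strings are illustrative assumptions, not the actual vamp-ui code:

```python
def instance_panel_label(phase: str, health_pct: float) -> str:
    """Return what the instances screen should display for a service.

    Shows a health percentage only for fully deployed services; for any
    other phase (e.g. "Deploying", "Failed"), shows the phase name so a
    misleading 100% is never rendered.
    """
    if phase == "Deployed":
        return f"{health_pct:.0f}% healthy"
    return phase

print(instance_panel_label("Deployed", 100.0))   # "100% healthy"
print(instance_panel_label("Deploying", 100.0))  # "Deploying"
print(instance_panel_label("Failed", 100.0))     # "Failed"
```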