orchestracities/charts

Orion Helm Chart Deployment Fails

Closed · 1 comment

I am trying to deploy the Orion Helm chart by running helm install orion oc/orion.
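
For context, helm install orion oc/orion assumes the chart repository has already been registered under the oc alias, roughly as follows (where the placeholder stands for the repository URL documented in this repo's README):

helm repo add oc <charts-repository-url>
helm repo update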

The response to helm install is as follows, indicating that the release deployed successfully:

NAME: orion
LAST DEPLOYED: Mon Feb 15 18:14:51 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=orion,release=orion" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:1026
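
Once a pod is Ready, the port-forward from the NOTES can be checked against Orion's /version endpoint (the same endpoint the liveness probe below targets), for example:

curl http://127.0.0.1:8080/version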

However, when running kubectl get all, the response is as follows, indicating that the Pods fail during initialization:

NAME                               READY   STATUS                  RESTARTS   AGE
pod/orion-orion-64559b77fb-2x77w   0/1     Init:CrashLoopBackOff   6          10m
pod/orion-orion-64559b77fb-lkmf8   0/1     Init:CrashLoopBackOff   6          9m21s

NAME                  TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/orion-orion   ClusterIP   None         <none>        80/TCP    10m

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/orion-orion   0/2     2            0           10m

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/orion-orion-64559b77fb   2         2         0       10m

NAME                                              REFERENCE                TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/orion-orion   Deployment/orion-orion   <unknown>/50%   2         6         2          10m

Also, when running kubectl describe deployment orion-orion, the response is as follows:

Name:                   orion-orion
Namespace:              default
CreationTimestamp:      Mon, 15 Feb 2021 18:14:51 +0200
Labels:                 app=orion
                        app.kubernetes.io/managed-by=Helm
                        chart=orion-0.1.7
                        heritage=Helm
                        release=orion
Annotations:            deployment.kubernetes.io/revision: 1
                        meta.helm.sh/release-name: orion
                        meta.helm.sh/release-namespace: default
Selector:               app=orion
Replicas:               2 desired | 2 updated | 2 total | 0 available | 2 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 25% max surge
Pod Template:
  Labels:       app=orion
                release=orion
  Annotations:  chaos.alpha.kubernetes.io/enabled: false
  Init Containers:
   create-indexes:
    Image:      mongo:3.2
    Port:       <none>
    Host Port:  <none>
    Command:
      mongo
      --host
      rs/mongo-rs-mongodb-replicaset
      orion
      --eval
      db = db.getSiblingDB("admin"); dbs = db.runCommand({ "listDatabases": 1 }).databases; dbs.forEach(function(database) { if(database.entities) database.entities.createIndexes([{"_id.id": 1}, {"_id.type": 1}, {"_id.servicePath": 1}, {"attrNames": 1}, {"creDate": 1}]); });
    Environment:  <none>
    Mounts:       <none>
  Containers:
   orion:
    Image:      fiware/orion:2.4.0
    Port:       1026/TCP
    Host Port:  0/TCP
    Command:
      contextBroker
      -fg
      -notificationMode
      transient
      -httpTimeout
      30000
      -logLevel
      INFO
      -dbhost
      mongo-rs-mongodb-replicaset
      -rplSet
      rs
      -dbTimeout
      10000
      -corsOrigin
      __ALL
      -reqMutexPolicy
      none
    Liveness:     http-get http://:1026/version delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get http://:1026/v2 delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:  <none>
NewReplicaSet:   orion-orion-64559b77fb (2/2 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  10m    deployment-controller  Scaled up replica set orion-orion-64559b77fb to 1
  Normal  ScalingReplicaSet  9m44s  deployment-controller  Scaled up replica set orion-orion-64559b77fb to 2

Finally, kubectl describe pod/orion-orion-64559b77fb-2x77w shows the following:

Name:         orion-orion-64559b77fb-2x77w
Namespace:    default
Priority:     0
Node:         docker-desktop/192.168.65.3
Start Time:   Mon, 15 Feb 2021 18:14:51 +0200
Labels:       app=orion
              pod-template-hash=64559b77fb
              release=orion
Annotations:  chaos.alpha.kubernetes.io/enabled: false
Status:       Pending
IP:           10.1.0.24
IPs:
  IP:           10.1.0.24
Controlled By:  ReplicaSet/orion-orion-64559b77fb
Init Containers:
  create-indexes:
    Container ID:  docker://523517a74de9ed6384bd0f610e4d0b72dad9b94594f599015ae6636cccdff5fe
    Image:         mongo:3.2
    Image ID:      docker-pullable://mongo@sha256:0463a91d8eff189747348c154507afc7aba045baa40e8d58d8a4c798e71001f3
    Port:          <none>
    Host Port:     <none>
    Command:
      mongo
      --host
      rs/mongo-rs-mongodb-replicaset
      orion
      --eval
      db = db.getSiblingDB("admin"); dbs = db.runCommand({ "listDatabases": 1 }).databases; dbs.forEach(function(database) { if(database.entities) database.entities.createIndexes([{"_id.id": 1}, {"_id.type": 1}, {"_id.servicePath": 1}, {"attrNames": 1}, {"creDate": 1}]); });
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 15 Feb 2021 18:21:58 +0200
      Finished:     Mon, 15 Feb 2021 18:22:12 +0200
    Ready:          False
    Restart Count:  6
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tbvkf (ro)
Containers:
  orion:
    Container ID:
    Image:         fiware/orion:2.4.0
    Image ID:
    Port:          1026/TCP
    Host Port:     0/TCP
    Command:
      contextBroker
      -fg
      -notificationMode
      transient
      -httpTimeout
      30000
      -logLevel
      INFO
      -dbhost
      mongo-rs-mongodb-replicaset
      -rplSet
      rs
      -dbTimeout
      10000
      -corsOrigin
      __ALL
      -reqMutexPolicy
      none
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:1026/version delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:1026/v2 delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tbvkf (ro)
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-tbvkf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-tbvkf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  11m                 default-scheduler  Successfully assigned default/orion-orion-64559b77fb-2x77w to docker-desktop
  Normal   Pulled     9m2s (x5 over 11m)  kubelet            Container image "mongo:3.2" already present on machine
  Normal   Created    9m2s (x5 over 11m)  kubelet            Created container create-indexes
  Normal   Started    9m2s (x5 over 11m)  kubelet            Started container create-indexes
  Warning  BackOff    74s (x41 over 10m)  kubelet            Back-off restarting failed container
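
A quick way to confirm what the init container is failing on is to read its log directly (a sketch, using the pod and init-container names from the describe output above):

kubectl logs orion-orion-64559b77fb-2x77w -c create-indexes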

It seems that the problem is related to the MongoDB connection. However, I haven't provided any customized configuration.
Is there something I can do to get a successful installation?
Thanks.

Sorry, we lost track of this issue. Did you deploy MongoDB before deploying Orion?
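
For reference, the chart defaults shown above expect a MongoDB replica set reachable at mongo-rs-mongodb-replicaset with replica set name rs, so MongoDB has to be up before Orion can initialize. A minimal sketch, assuming a MongoDB replica-set chart whose release resolves to that service name (for example the now-deprecated stable/mongodb-replicaset chart installed under the release name mongo-rs):

helm install mongo-rs stable/mongodb-replicaset   # replica set name must be "rs" to match -rplSet rs above
helm install orion oc/orion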