pvc-xxx-jiva-ctrl-yyy depends on openebs/jiva:ci
What steps did you take and what happened:
I have an installation based on Helm, with all images in a private registry:
k get all
NAME READY STATUS RESTARTS AGE
pod/openebs-jiva-csi-controller-0 5/5 Running 0 110m
pod/openebs-jiva-csi-node-9pv94 3/3 Running 0 110m
pod/openebs-jiva-csi-node-jt2ds 3/3 Running 0 110m
pod/openebs-jiva-csi-node-nz96j 3/3 Running 0 110m
pod/openebs-jiva-csi-node-tjtdx 3/3 Running 0 110m
pod/openebs-jiva-csi-node-vdj9t 3/3 Running 0 110m
pod/openebs-jiva-operator-98df8b7b5-f98pp 1/1 Running 0 110m
pod/openebs-localpv-provisioner-84cb775f46-xxbmv 1/1 Running 0 110m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/openebs-jiva-csi-node 5 5 5 5 5 110m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/openebs-jiva-operator 1/1 1 1 110m
deployment.apps/openebs-localpv-provisioner 1/1 1 1 110m
NAME DESIRED CURRENT READY AGE
replicaset.apps/openebs-jiva-operator-98df8b7b5 1 1 1 110m
replicaset.apps/openebs-localpv-provisioner-84cb775f46 1 1 1 110m
NAME READY AGE
statefulset.apps/openebs-jiva-csi-controller 1/1 110m
The Helm values for images look like the following:
replica:
  image:
    registry: brbs2p.ros.czso.cz:5000/
    repository: openebs/jiva
    tag: 3.0.0
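For reference, the chart was installed roughly like this; the repo URL, release name, and namespace here are illustrative, only the replica.image values above are exact:
# illustrative install command; repo URL, release name, and namespace are assumptions,
# the --set values mirror the replica.image section shown above
helm repo add openebs-jiva https://openebs.github.io/jiva-operator
helm install openebs-jiva openebs-jiva/jiva -n openebs --create-namespace \
  --set replica.image.registry=brbs2p.ros.czso.cz:5000/ \
  --set replica.image.repository=openebs/jiva \
  --set replica.image.tag=3.0.0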
What did you expect to happen:
To be able to create a Jiva volume.
During Jiva volume initialization I get the following error:
Events:
Type Reason Age From Message
Normal Scheduled 5m56s default-scheduler Successfully assigned openebs/pvc-523cbfc1-1d35-4abf-ab73-940b41de94db-jiva-ctrl-6d5f59dkqk5t to arbs1p.ros.czso.cz
Normal Pulled 5m46s kubelet Container image "brbs2p.ros.czso.cz:5000/openebs/m-exporter:3.0.0" already present on machine
Normal Created 5m44s kubelet Created container maya-volume-exporter
Normal Started 5m44s kubelet Started container maya-volume-exporter
Warning Failed 5m5s (x3 over 5m46s) kubelet Failed to pull image "openebs/jiva:ci": rpc error: code = Unknown desc = Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on 10.143.68.16:53: lame referral
Warning Failed 5m5s (x3 over 5m46s) kubelet Error: ErrImagePull
Warning Failed 4m27s (x6 over 5m43s) kubelet Error: ImagePullBackOff
Normal Pulling 4m12s (x4 over 5m55s) kubelet Pulling image "openebs/jiva:ci"
Normal BackOff 48s (x21 over 5m43s) kubelet Back-off pulling image "openebs/jiva:ci"
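The hard-coded image can also be seen directly on the controller pod spec, for example (pod name taken from the events above, namespace is openebs):
# confirm which image the jiva controller pod is trying to pull
kubectl -n openebs describe pod \
  pvc-523cbfc1-1d35-4abf-ab73-940b41de94db-jiva-ctrl-6d5f59dkqk5t | grep -i image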
I'm not able to specify something like the following in the Helm values:
image:
  registry: brbs2p.ros.czso.cz:5000/
  repository: openebs/jiva:ci
Moreover, openebs/jiva:ci is a very old image.
The next problem is that after volume deletion, the pod
init-pvc-13ca2cf4-f81d-4511-94a1-c2fcf5b8107b
shows another error:
Events:
Type Reason Age From Message
Normal Scheduled 60s default-scheduler Successfully assigned openebs/init-pvc-13ca2cf4-f81d-4511-94a1-c2fcf5b8107b to arbs1p.ros.czso.cz
Normal Pulling 14s (x3 over 58s) kubelet Pulling image "openebs/linux-utils:3.0.0"
Warning Failed 14s (x3 over 58s) kubelet Failed to pull image "openebs/linux-utils:3.0.0": rpc error: code = Unknown desc = Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on 10.143.68.16:53: lame referral
Warning Failed 14s (x3 over 58s) kubelet Error: ErrImagePull
Normal BackOff 0s (x4 over 57s) kubelet Back-off pulling image "openebs/linux-utils:3.0.0"
Warning Failed 0s (x4 over 57s) kubelet Error: ImagePullBackOff
Again, I can't specify that openebs/linux-utils:3.0.0 should be pulled from the private registry.
Regards
Vlado Hudec
Hi @hudec, thank you for raising the bug. I have raised a fix for this, and it would be great if you could review the PR.
For the openebs/linux-utils image: it comes from the dependency chart
https://github.com/openebs/jiva-operator/blob/3bb1ca70d984fd12352f1cc0c619e4e1e6e70993/deploy/helm/charts/Chart.yaml#L25
so you can use the flag localpv-provisioner.helperPod.image.registry.
All of the available values can be found here:
https://github.com/openebs/dynamic-localpv-provisioner/tree/develop/deploy/helm/charts
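For example, something like the following should point the helper pod image at your private registry; the release name and namespace below are placeholders, only the flag itself comes from the dependency chart:
# placeholder release name and namespace; the registry value is the one from this issue
helm upgrade openebs-jiva openebs-jiva/jiva -n openebs --reuse-values \
  --set localpv-provisioner.helperPod.image.registry=brbs2p.ros.czso.cz:5000/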
The fix is merged, and the updated operator YAML fixes the above issue. Please reopen the issue if you hit it again.