Storage Class being ignored for harbor-jobservice and upgrades fail
Steve-Gilmore opened this issue · 3 comments
Hi, when installing Harbor fresh on a new K8s cluster, parts of my values.yaml file are simply ignored and I do not know why.
The relevant part is here:
```
jobservice:
  storageClass: "nfs-client"
  accessMode: ReadWriteOnce
  size: 1Gi
```
The storage class is simply not applied. To get things working properly I have to patch the configuration after the dust has settled:
```
kubectl patch pvc harbor-jobservice -n harbor --type='json' -p='[{"op": "replace", "path": "/spec/storageClassName", "value":"nfs-client"}]'
```
Performing any form of upgrade results in the core container losing access to the database. I'm up and running, but I have no idea how I will manage this install moving forward.
After upgrade:
```
2024-03-09T22:20:03Z [INFO] [/common/dao/base.go:67]: Registering database: type-PostgreSQL host-harbor-database port-5432 database-registry sslmode-"disable"
[ORM]2024/03/09 22:20:03 register db Ping `default`, failed to connect to `host=harbor-database user=postgres database=registry`: failed SASL auth (FATAL: password authentication failed for user "postgres" (SQLSTATE 28P01))
2024-03-09T22:20:03Z [FATAL] [/core/main.go:184]: failed to initialize database: register db Ping `default`, failed to connect to `host=harbor-database user=postgres database=registry`: failed SASL auth (FATAL: password authentication failed for user "postgres" (SQLSTATE 28P01))
```
Helm version and chart version:
```
NAME    NAMESPACE  REVISION  UPDATED                                  STATUS    CHART          APP VERSION
harbor  harbor     1         2024-03-09 22:22:06.134698936 +0000 UTC  deployed  harbor-1.14.0  2.10.0
```
K8s version info:
```
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
dev-kube-master01.home.local Ready control-plane 17d v1.29.2 192.168.14.15 <none> Ubuntu 22.04.4 LTS 6.1.74-12781-g74961fb0a5d2 containerd://1.6.28
dev-kube-worker02.home.local Ready worker 16d v1.29.2 192.168.14.16 <none> Ubuntu 22.04.4 LTS 6.1.74-12781-g74961fb0a5d2 containerd://1.6.28
dev-kube-worker03 Ready worker 7d19h v1.29.2 192.168.14.17 <none> Ubuntu 22.04.4 LTS 5.15.0-97-generic containerd://1.6.28
```
Volume claim info:
```
kubectl -n harbor get pvc harbor-jobservice -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    helm.sh/resource-policy: keep
    meta.helm.sh/release-name: harbor
    meta.helm.sh/release-namespace: harbor
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: cluster.local/nfs-client-provisioner-nfs-subdir-external-provisioner
    volume.kubernetes.io/storage-provisioner: cluster.local/nfs-client-provisioner-nfs-subdir-external-provisioner
  creationTimestamp: "2024-03-09T22:22:25Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: harbor
    app.kubernetes.io/managed-by: Helm
    chart: harbor
    component: jobservice
    heritage: Helm
    release: harbor
  name: harbor-jobservice
  namespace: harbor
  resourceVersion: "2805200"
  uid: 9de8d65a-97e8-4f9c-985d-3f1c25f0c9ad
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-client
  volumeMode: Filesystem
  volumeName: pvc-9de8d65a-97e8-4f9c-985d-3f1c25f0c9ad
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  phase: Bound
```
Let me know if there is other data to provide.
Hi @Steve-Gilmore,
I think there are two parts to this question.

- Could you explain in more detail how this storage class is being ignored? Your PVC appears to have the correct storage class name and is already bound. BTW, the PVC spec is immutable; can your PVC actually be patched while it is in the Bound state?
- Is any other PVC using the same storage class `nfs-client` as harbor-jobservice? Also, what does the following show?
  ```
  kubectl get pod <harbor-jobservice-pod> -o json | jq -r '.spec.volumes'
  ```
- For the DB issue, the error message shows a password authentication failure. Please check whether the `database.password` configuration in values.yaml was updated or is inconsistent between before and after the upgrade.
- Also, what is your harbor-helm version?
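One way to compare the configured password against what is actually deployed is to read it back from the database secret. This is only a sketch: the secret name `harbor-database` and key `POSTGRES_PASSWORD` are assumptions based on harbor-helm chart defaults, so verify both in your install before relying on the output:

```shell
# Decode the Postgres password currently stored in the chart's database secret.
# Secret and key names assume harbor-helm defaults; adjust if you customized them.
kubectl -n harbor get secret harbor-database \
  -o jsonpath='{.data.POSTGRES_PASSWORD}' | base64 -d; echo
```

If the decoded value differs from the password in your values.yaml, that mismatch would produce exactly the SASL auth failure in the log above.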
I missed this the first time I was installing as well. The correct path is `persistence.jobservice.jobLog.storageClass` (note `jobLog`). It's easy to miss if you're not paying attention, since it's sandwiched in the `persistence` section alongside the other components, which all use the `[component].storageClass` format except that one.
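For reference, a minimal values.yaml sketch with the key in the right place. Note the path above is abbreviated; in the chart's default values.yaml (as of harbor-helm 1.14) the block is nested under `persistence.persistentVolumeClaim`, so double-check against your chart version:

```
# values.yaml fragment; nesting per harbor-helm 1.14 defaults (verify for your version)
persistence:
  persistentVolumeClaim:
    jobservice:
      jobLog:                      # the jobservice PVC settings live under jobLog
        storageClass: "nfs-client"
        accessMode: ReadWriteOnce
        size: 1Gi
```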
Closing this now, as @caguiclajmg figured out that it should be `persistence.jobservice.jobLog.storageClass`.