canonical/minio-operator

Minio failed to upgrade 1.6 to 1.7


Minio failed to upgrade with the following error message:

ERROR Juju on containers does not support updating storage on a statefulset.
The new charm's metadata contains updated storage declarations.
You'll need to deploy a new charm rather than upgrading if you need this change.
 not supported (not supported)

Jira

I am able to reproduce this issue after refreshing minio from ckf-1.6/stable to latest/edge. This looks like something outside the control of Charmed Kubeflow, though. Will investigate more.

Logs:

ubuntu@ip-172-31-13-175:~$ juju refresh minio --channel latest/edge
Added charm-hub charm "minio", revision 159 in channel latest/edge, to the model
ERROR Juju on containers does not support updating storage on a statefulset.
The new charm's metadata contains updated storage declarations.
You'll need to deploy a new charm rather than upgrading if you need this change.
 not supported (not supported)

Looking at the error message, it seems like the latest metadata.yaml contains "updated storage declarations". Doing a bit of research, I see this commit may be causing the disruption. I will try to remove that setting and see if I can upgrade afterwards.
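
For reference, one way to compare the storage declarations between the two revisions locally (a rough sketch; it assumes a Juju client new enough to have juju download, and unzip on the path; the output filenames are placeholders):

# Download both charm revisions from Charmhub.
juju download minio --channel ckf-1.6/stable --filepath minio_stable.charm
juju download minio --channel latest/edge --filepath minio_edge.charm
# A charm is a zip archive; extract metadata.yaml from each and diff the storage sections.
unzip -p minio_stable.charm metadata.yaml > metadata_stable.yaml
unzip -p minio_edge.charm metadata.yaml > metadata_edge.yaml
diff metadata_stable.yaml metadata_edge.yaml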

I managed to get data across from the old to the new charm. However, it was done without scaling Minio down to zero and then back up to one, i.e. there is a possibility of data inconsistency if someone or some service accesses Minio while the data migration is happening.
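
For the record, the copy itself can be done with the MinIO client; the sketch below is only an illustration of that kind of transfer, not the exact procedure used here (the aliases, endpoints, credentials, and bucket name are placeholders, not values from this deployment):

# Point mc at the old and new Minio services (adjust to the real endpoints and credentials).
mc alias set old-minio http://<old-minio-ip>:9000 <access-key> <secret-key>
mc alias set new-minio http://<new-minio-ip>:9000 <access-key> <secret-key>
# Mirror the contents of each bucket from the old deployment to the new one.
mc mirror old-minio/<bucket> new-minio/<bucket>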

> Looking at the error message, it seems like the latest metadata.yaml contains "updated storage declarations". Doing a bit of research, I see this commit may be causing the disruption. I will try to remove that setting and see if I can upgrade afterwards.

Yes. That was done to accommodate a request to have minimal storage for Minio in gateway mode. But if it breaks the user experience, I think we should revert it.
We can add a recommendation: when deploying Minio in gateway mode, set storage to 1G during deployment:
juju deploy minio --storage minio-data=1G

What actually worries me is that when I scale Minio down using the K8s API:

kubectl scale statefulset minio -n kubeflow --replicas=0

Then back up again:

kubectl scale statefulset minio -n kubeflow --replicas=1

All data from that PVC disappears.
So, if K8s decides to scale down our StatefulSets for some reason, the data on the storage will be wiped out on scale-up.
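
A quick way to tell whether the PVC itself is being removed or only its contents are lost (a sketch; the pod name, container name, and data path below are assumptions, adjust them to the actual pod spec):

# Check that the claim and the bound volume survive the scale-down/scale-up cycle.
kubectl get pvc -n kubeflow
kubectl get pv
# After scaling back up, look inside the mount to see whether the objects are still there.
kubectl exec -n kubeflow minio-0 -c minio -- ls -la /data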

Works. Verified in the latest upgrade (after the change was reverted).