[release-1.10] Backport CDI 1.58.0 into 1.10 operator release
GingerGeek opened this issue · 2 comments
Is this a BUG REPORT or FEATURE REQUEST?: Feature Request
/kind enhancement
What happened: The current operator release, 1.10, includes version 1.57.0 of CDI and related packages.
What you expected to happen: Ideally we would like CDI 1.58.0 so that we have a default storage class specifically for virtualization (kubevirt/containerized-data-importer#2913).
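For reference, a minimal sketch of what that feature would look like from the user side, assuming the `storageclass.kubevirt.io/is-default-virt-class` annotation that kubevirt/containerized-data-importer#2913 introduces (the StorageClass name and provisioner here are hypothetical):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vm-rbd   # hypothetical name
  annotations:
    # Marks this class as the default for virtualization workloads only,
    # leaving the cluster-wide default untouched.
    storageclass.kubevirt.io/is-default-virt-class: "true"
provisioner: rbd.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```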
How to reproduce it (as minimally and precisely as possible): N/A
Anything else we need to know?:
#2670 pulled the CDI bump into this repo's main branch via a bot; is it possible to ask the bot to update the release branch as well?
I'm not sure whether a backport like this is something you would usually do?
Environment:
- HCO version (use `oc get csv -n kubevirt-hyperconverged`): kubevirt-hyperconverged-operator.v1.10.0
- Kubernetes version (use `kubectl version`): 4.14.0-0.okd-2024-01-06-084517 (v1.27.1-3474+5c56cc35e3edc1-dirty)
- Cloud provider or hardware configuration: Bare metal
- Install tools:
- Others:
Hi @GingerGeek,
is there any specific reason for this request?
As a general rule we keep the y-streams of the sibling projects in parallel: on HCO v1.10.z we automatically bump to the v1.0.z releases of kubevirt/kubevirt, v1.57.z of CDI, v0.89.z of CNAO, and so on.
We expect to consume CDI v1.58.z releases as part of the HCO v1.11.z stream.
Hi,
Thanks for the explanation. My main motivation was to pull in kubevirt/containerized-data-importer#2913, which allows you to tag a storage class as the default for VMs.
The default storage class in my cluster was a standard ODF-like volume, so our VMs raised warnings related to VirtualMachineCRCErrors. My current workaround is to switch the cluster default to a VM storage class that includes the "krbd:rxbounce" map option.
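For anyone hitting the same warnings, here's a rough sketch of that workaround, assuming a ceph-csi RBD provisioner; the StorageClass name, `clusterID`, and `pool` values are placeholders, and the secret-related parameters a real ODF setup needs are omitted:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vm-rbd-rxbounce   # placeholder name
  annotations:
    # Temporary workaround: make this the cluster-wide default until a
    # virt-specific default (CDI 1.58) is available.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>   # placeholder
  pool: <rbd-pool>          # placeholder
  imageFeatures: layering
  # ceph-csi passes these flags through to the kernel RBD client; rxbounce
  # is the map option that addresses the CRC errors seen with VM workloads.
  mapOptions: "krbd:rxbounce"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```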
My preference would be not to have to modify cluster-wide defaults; indeed, prior to installing HCO I didn't even have a default StorageClass in the cluster.
Totally understand that it's your usual process not to pull in an upstream update like this, so I'll close this for now and await the next release!