# minimum supported k8s version
olix0r opened this issue
From what I can tell, kube-runtime v0.63 does not work when `k8s-openapi` targets Kubernetes versions before v1.19, because the events API was not at v1 before 1.19. We get the following error:
```
error[E0432]: unresolved import `k8s_openapi::api::events::v1`
 --> /home/ver/.cargo/registry/src/github.com-1ecc6299db9ec823/kube-runtime-0.63.1/src/events.rs:3:46
  |
3 | api::{core::v1::ObjectReference, events::v1::Event as CoreEvent},
  |                                          ^^ could not find `v1` in `events`
```
Practically, 1.19 is probably a perfectly reasonable minimum supported version at this point. But this was a surprise to us, since Linkerd's minimum supported version is currently 1.17, so we either need to bump our minimum version or stay on kube v0.61.
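For context, the Kubernetes API surface is selected through `k8s-openapi`'s version features; a minimal sketch of a downstream manifest pinned below v1.19 (crate versions here are illustrative):

```toml
# Illustrative downstream Cargo.toml: the v1_17 feature on k8s-openapi
# leaves k8s_openapi::api::events::v1 undefined, so kube-runtime 0.63's
# events module fails to compile.
[dependencies]
kube = { version = "0.63", features = ["runtime"] }
k8s-openapi = { version = "0.13", features = ["v1_17"] }
```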
Ignoring Linkerd's issues... I think `kube` needs to be explicit about its minimum supported version and communicate (i.e. in release notes, and ultimately semver) when support for a version is dropped. I'd even propose that `kube` re-export `k8s-openapi` with features for each of its supported versions so that this compatibility matrix is made explicit in kube's features.
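A hedged sketch of what such re-exported version features could look like in kube's own manifest (all feature names here are hypothetical):

```toml
# Hypothetical [features] section in kube's Cargo.toml: each supported
# Kubernetes version becomes an explicit kube feature that forwards to
# the matching k8s-openapi version feature.
[features]
k8s-v1_19 = ["k8s-openapi/v1_19"]
k8s-v1_20 = ["k8s-openapi/v1_20"]
k8s-v1_21 = ["k8s-openapi/v1_21"]
```

A downstream crate would then select e.g. `kube = { version = "0.63", features = ["k8s-v1_20"] }` and never name `k8s-openapi` directly.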
As a suggestion, it's probably best for kube-rs to match the k8s support policy https://kubernetes.io/releases/patch-releases/ -- v1.19 is reaching its EOL in the next few hours, so it's probably perfectly fine to support v1.20+ (until February, when 1.20 reaches its EOL).
Thanks for reporting.
> I think `kube` needs to be explicit about its minimum supported version and communicate (i.e. in release notes, and ultimately semver) when support for a version is dropped.
I don't think this was intentional. As far as I know, we try to support all versions supported by `k8s-openapi`, which is based on Cloud offerings. But yeah, we should be explicit, and test against all of them. This doesn't happen too often, but it's not the first time either (#621). I believe this will happen more often as we add more (non-core) resource-specific utils.
> I'd even propose that `kube` re-export `k8s-openapi` with features for each of its supported versions so that this compatibility matrix is made explicit in kube's features.
Adding version features to `kube` is probably easier to maintain, and easier for the users, as we add more resource-specific utilities. #621 was resolved by adding `deprecated-crd-v1beta1`. But then we'll need to keep track of which version supports what, and that's going to be a mess. Also, I don't think we can tell if the user has enabled a `k8s-openapi` feature that conflicts with that.
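For example (a hypothetical manifest, reusing the feature names sketched above), Cargo's feature unification would enable both version features on `k8s-openapi`, and kube has no way to detect that from its side:

```toml
# Hypothetical conflict: kube selects v1_19 through its own feature while
# the user's direct k8s-openapi dependency selects v1_17. Feature
# unification enables both v1_17 and v1_19 on k8s-openapi, which rejects
# having more than one version feature at build time.
[dependencies]
kube = { version = "0.63", features = ["k8s-v1_19"] }
k8s-openapi = { version = "0.13", features = ["v1_17"] }
```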
@clux can you elaborate on why you didn't want to introduce version features? (#621 (comment))
As a reference point, I think we're going to adopt a support policy that matches Kubernetes (linkerd/linkerd2#7171). At the very least, we want to be able to exercise the minimum supported version in k3d/kind, and that's difficult/impossible with versions like 1.17 (which reached EOL in January 2021).
Can't speak for @clux, but I don't like us having to maintain version features either:
- It blocks access to features added in new k8s versions until kube is updated (depending on how this is implemented, at least)
- It's redundant with k8s-openapi version features (or k8s-pb if we end up migrating to that)
- The same problem will appear for downstream crates as well
One immediate workaround would be to wrap the events module in a `k8s_if_ge_1_19!` block.
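A minimal sketch of that workaround, assuming it lands where kube-runtime declares the module:

```rust
// In kube-runtime's lib.rs (sketch): k8s-openapi's k8s_if_ge_1_19! macro
// expands its body only when the selected Kubernetes version feature is
// v1_19 or later, so the events module (and its events::v1 import) simply
// does not exist on older versions.
k8s_openapi::k8s_if_ge_1_19! {
    pub mod events;
}
```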
Definitely agree that we need to agree on what we actually support, and actually test against that.
Yeah, the duplication was my main reason there originally. Although I do think there are some good benefits here that can be architected from it if we think about it from a fresh `k8s-pb` perspective:
- if we re-exported features we could also re-export types, avoiding the duplicate selection problem (see the sketch after this list)
- re-exporting versioning features lets us use `k8s-openapi` as a true dependency: not a side-injected one that breaks dependabot upgrades with `kube`, and not a required dev dep that causes constant problems internally with CI (dev dep specificity in all crates, packaging features, docs build features, resolver 2 complaints)
- we can run CI test suites against all supported versions without having to code in a specific feature hack/override (like we currently have in the examples)
- blocking access to new version features until kube was updated was always a problem anyway due to other minor changes in their API - if we manage the codegen then it should be easier to define a release process for this
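A rough sketch of the type re-export idea from the first bullet (the `kube::k8s` module path is hypothetical):

```rust
// Hypothetical re-export surface in kube: downstream crates would import
// kube::k8s::... instead of depending on k8s-openapi directly, so the
// Kubernetes version selection lives entirely in kube's features.
pub mod k8s {
    pub use k8s_openapi::api::core::v1::{ObjectReference, Pod};
    pub use k8s_openapi::apimachinery::pkg::apis::meta::v1::ObjectMeta;
}
```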
As for what we support, I had suggested: `min_supported(eks k8s version, aks k8s version, gke k8s version)`, which is currently `min(1.17, 1.15, 1.19)` if I am reading the AKS one correctly.
There are four categories of dates here we can concern ourselves with:
- the aggressive upstream k8s EOL: 1.19 reaches EOL today, implying >=1.20 only
- EKS supports Kubernetes versions for longer, implying it's OK to leave at least 1.18 around
- clusters lagging long behind the normal EOL, like AKS, implying it's OK to have 1.15 around (maybe 1.16 soon)
- Kubernetes's skew policies tend to allow 2 or 3 minor versions, implying it's OK to only support 1.20 or 1.21
Now, we could go full Kubernetes: version after it and use its skew policies. But I don't think we are set up to do that yet - too much flux to maintain several branches of releases. Maybe after our big projects like gold + pb codegen are sorted out.
I also don't want to make users choose NOT to upgrade `kube` because their infra team is overburdened and running in "trailing EKS mode"; I think a little extra leeway on the EOL is helpful. It also doesn't cost us a lot, since we don't currently integrate with a bunch of version-specific APIs (just `admission`, `crds`, and now `events`, IIRC).
I think the best we can do for now is something like `kubernetes EOL date + N months`:

| N | MK8SV |
|---|-------|
| 0 | 1.20 (today) |
| 1 | 1.19 (until 2021-11-28) |
| 4 | 1.19 (until 2022-02-28) |
| 5 | 1.18 (until 2021-11-18) |
| 9 | 1.18 (until 2022-02-18) |
| 10 | 1.17 (until 2021-11-13) |
| 12 | 1.17 (until 2022-01-13) |

and decide on an `N`, e.g. 6 or 12 probably initially, then refine as we move towards full stability.
A fix for this specific aspect of the Kubernetes minimum version is published in 0.63.2, by hiding the broken module for older Kubernetes versions.
We will probably also codify some MSK8SV policy somehow, but for now, we didn't want to bump it from something like 1.15 to 1.19 just because of an accident.
Had a go at creating a CI matrix job for this, and could not get anywhere sensible due to the version features. Thinking that this has to be tackled at the codegen level instead by avoiding version features.
Been thinking more about this. It turns out we can define a minimum supported version and, as a start, run simple tests against stable APIs without changing the k8s-openapi feature version (because our tests don't rely on deprecated/alpha APIs). This is done in #924, and policies are proposed in kube-rs/website#19.