kubernetes-sigs/gcp-compute-persistent-disk-csi-driver

Update the Prow Jobs definitions for the Windows OS versions supported

mauriciopoppe opened this issue · 15 comments

I saw that even after the fix in #1046, Windows 20H2 is still failing; it might be because SAC (the Semi-Annual Channel) is no longer supported.

At the same time, check if there's support for ltsc2022; if so, we should also add a Prow job for it.
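
If it helps, a quick way to check before wiring up the job (a minimal sketch; the image names below are illustrative, not the driver's actual build targets):

# Does an ltsc2022 variant of the Windows base image exist?
docker manifest inspect mcr.microsoft.com/windows/servercore:ltsc2022 \
  && echo "ltsc2022 base image is available"
# List the Windows os.version entries an existing multi-OS manifest carries:
docker manifest inspect gcr.io/example/csi-driver:latest \
  | jq -r '.manifests[].platform | select(.os=="windows") | ."os.version"'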

/kind failing-test

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

With the migration to CSI Proxy v2 in #1071, the new HPC (HostProcess container) base image should be able to support any Windows Server version, though we still need explicit CI/CD work to tag images for each Windows version we plan to support.
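
Roughly, that tagging step could look like the following (a sketch only; the image name, tags, and OS build numbers are placeholders, not our actual pipeline):

# Assemble one manifest list covering both Windows variants:
docker manifest create gcr.io/example/csi-driver:v1.0.0 \
  gcr.io/example/csi-driver:v1.0.0_ltsc2019 \
  gcr.io/example/csi-driver:v1.0.0_ltsc2022
# Stamp each entry with the os.version the kubelet matches against:
docker manifest annotate --os windows --os-version 10.0.17763.3406 \
  gcr.io/example/csi-driver:v1.0.0 gcr.io/example/csi-driver:v1.0.0_ltsc2019
docker manifest annotate --os windows --os-version 10.0.20348.1006 \
  gcr.io/example/csi-driver:v1.0.0 gcr.io/example/csi-driver:v1.0.0_ltsc2022
docker manifest push gcr.io/example/csi-driver:v1.0.0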

In any case, SAC is no longer supported after the migration, and the test script was modified in that PR to reflect this, so this issue can be safely closed once the PR is merged.

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

/remove-lifecycle rotten

@leiyiz @mattcary I recently saw that https://testgrid.k8s.io/provider-gcp-compute-persistent-disk-csi-driver#ci-windows-2019-provider-gcp-compute-persistent-disk-csi-driver started failing again :(.

In an example failed run, it looks like it timed out prepulling images; would it be possible for you to take a look?

W0321 06:32:27.931] daemonset.apps/prepull-test-containers created
W0321 06:32:27.931] + wait_on_prepull
W0321 06:32:27.932] + retries=180
W0321 06:32:27.932] + [[ 180 -ge 0 ]]
W0321 06:32:27.932] ++ kubectl get daemonset prepull-test-containers -o 'jsonpath={.status.numberReady}'
W0321 06:32:27.933] + ready=0
W0321 06:32:27.933] ++ kubectl get daemonset prepull-test-containers -o 'jsonpath={.status.desiredNumberScheduled}'
W0321 06:32:27.934] + required=3
W0321 06:32:27.934] + [[ 0 -eq 3 ]]
W0321 06:32:27.934] + (( retries-- ))
W0321 06:32:27.934] + sleep 10s
W0321 06:32:27.934] + [[ 179 -ge 0 ]]
W0321 06:32:27.934] ++ kubectl get daemonset prepull-test-containers -o 'jsonpath={.status.numberReady}'
W0321 06:32:27.934] + ready=0
W0321 06:32:27.934] ++ kubectl get daemonset prepull-test-containers -o 'jsonpath={.status.desiredNumberScheduled}'
W0321 06:32:27.935] + required=3
W0321 06:32:27.935] + [[ 0 -eq 3 ]]
W0321 06:32:27.935] + (( retries-- ))
W0321 06:32:27.936] + sleep 10s
W0321 06:32:27.936] + [[ 178 -ge 0 ]]
W0321 06:32:27.936] ++ kubectl get daemonset prepull-test-containers -o 'jsonpath={.status.numberReady}'
W0321 06:32:27.937] + ready=0
W0321 06:32:27.937] ++ kubectl get daemonset prepull-test-containers -o 'jsonpath={.status.desiredNumberScheduled}'
W0321 06:32:27.937] + required=3
W0321 06:32:27.937] + [[ 0 -eq 3 ]]
W0321 06:32:27.937] + (( retries-- ))
W0321 06:32:27.937] + sleep 10s
W0321 06:32:27.938] + [[ 177 -ge 0 ]]
W0321 06:32:27.938] ++ kubectl get daemonset prepull-test-containers -o 'jsonpath={.status.numberReady}'
W0321 06:32:27.938] + ready=0
W0321 06:32:27.938] ++ kubectl get daemonset prepull-test-containers -o 'jsonpath={.status.desiredNumberScheduled}'
W0321 06:32:27.938] + required=3
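
For whoever picks this up, this is roughly how I'd start digging into why the daemonset pods never became ready (the daemonset name is from the log above; the pod name is a placeholder):

# Which nodes got a pod, and how many are ready?
kubectl describe daemonset prepull-test-containers
# Are the pods stuck in ImagePullBackOff / ContainerCreating?
kubectl get pods -o wide | grep prepull-test-containers
# Image pull failures show up in the pod events:
kubectl describe pod <prepull-pod-name>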

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

/remove-lifecycle rotten