kubernetes-sigs/gcp-compute-persistent-disk-csi-driver

Flaky CMEK test fails with "Volume lifecycle should have failed, but succeeded"

pwschuurman opened this issue · 6 comments

I0811 17:03:01.758] GCE PD CSI Driver Should create CMEK key, go through volume lifecycle, validate behavior on key revoke and restore [It] on pd-standard
I0811 17:03:01.758] /go/src/sigs.k8s.io/gcp-compute-persistent-disk-csi-driver/test/e2e/tests/single_zone_e2e_test.go:640
I0811 17:03:01.758] 
I0811 17:03:01.758]   [FAILED] Volume lifecycle should have failed, but succeeded
I0811 17:03:01.758]   Expected
I0811 17:03:01.759]       <nil>: nil
I0811 17:03:01.759]   not to be nil
I0811 17:03:01.759]   In [It] at: /go/src/sigs.k8s.io/gcp-compute-persistent-disk-csi-driver/test/e2e/tests/single_zone_e2e_test.go:617 @ 08/11/23 17:02:57.271

/kind flake
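
For context, the assertion at single_zone_e2e_test.go:617 fires in the key-revoke step: the test revokes the CMEK key, runs the volume lifecycle again, and expects a non-nil error, so when the lifecycle still succeeds Gomega reports "Expected <nil>: nil not to be nil". The sketch below is not the driver's actual test code: `revokeKey` and `runVolumeLifecycle` are hypothetical placeholders for the test's own helpers, and the polling variant is just one possible deflake, assuming the flake comes from key-revocation propagation delay in GCE.

```go
// Minimal sketch of the assertion pattern behind this failure and one possible
// deflake. Only the Ginkgo/Gomega APIs shown are real; the helpers are stubs.
package tests

import (
	"time"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

var _ = Describe("GCE PD CSI Driver CMEK (sketch)", func() {
	It("asserts failure in a single shot (flaky if revocation has not propagated)", func() {
		revokeKey()
		// If the key revocation has not taken effect yet, the lifecycle still
		// succeeds, err is nil, and Gomega prints "Expected <nil>: nil not to be nil".
		err := runVolumeLifecycle()
		Expect(err).NotTo(BeNil(), "Volume lifecycle should have failed, but succeeded")
	})

	It("polls for the expected failure (one possible deflake)", func() {
		revokeKey()
		// Bound the wait for the revocation to propagate instead of asserting once.
		Eventually(runVolumeLifecycle, 5*time.Minute, 15*time.Second).
			ShouldNot(Succeed(), "Volume lifecycle should have failed after key revoke")
	})
})

// Hypothetical stubs so the sketch is self-contained; the real test talks to GCE.
func revokeKey()                {}
func runVolumeLifecycle() error { return nil }
```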

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.