kubernetes/enhancements

Azure disk in-tree to CSI driver migration

davidz627 opened this issue Β· 70 comments

Enhancement Description

Parent enhancement: #625
Public Migration testing CI: LINK

TODO

  • Replace design/KEP with specific one if needed
  • Link public migration testing CI

/sig storage

azure disk & file won't go beta in 1.18 since there is a dependency on CSI Windows support. If we go beta, all requests for the built-in drivers would be redirected to the CSI drivers; since the CSI drivers don't work on Windows yet, that would break the azure disk & file drivers on Windows.
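For context on the mechanics: the redirection described above is controlled by feature gates on the kubelet and kube-controller-manager, translating the in-tree plugin name (kubernetes.io/azure-disk) to the CSI driver name (disk.csi.azure.com). Below is a minimal sketch of the kubelet side, assuming the gate names from the upstream CSI migration work; it is illustrative, not a change proposed in this issue:

```yaml
# KubeletConfiguration excerpt enabling Azure Disk CSI migration (illustrative).
# The same gates must also be enabled on kube-controller-manager
# (e.g. --feature-gates=CSIMigration=true,CSIMigrationAzureDisk=true)
# so the volume controllers route attach/provision calls to the CSI driver.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CSIMigration: true          # master switch for in-tree -> CSI translation
  CSIMigrationAzureDisk: true # kubernetes.io/azure-disk -> disk.csi.azure.com
```

Note that the Azure Disk CSI driver (disk.csi.azure.com) must already be installed in the cluster for this routing to work.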

/sig cloud-provider
/area provider/azure

@davidz627 thanks for driving this

@andyzhangx feel free to edit description, you are the owner of this issue now :)

@feiskyer @andyzhangx should we track this for 1.18? Your description indicates you plan to go beta on this, but the linked issue suggests you're blocked on some Windows issues?

/milestone v1.19

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

Hi @feiskyer -- 1.19 Enhancements Lead here. I wanted to check in: do you think this enhancement will graduate to Beta in 1.19?


The current release schedule is:

  • Monday, April 13: Week 1 - Release cycle begins
  • Tuesday, May 19: Week 6 - Enhancements Freeze
  • Thursday, June 25: Week 11 - Code Freeze
  • Thursday, July 9: Week 14 - Docs must be completed and reviewed
  • Tuesday, August 4: Week 17 - Kubernetes v1.19.0 released

Azure disk & azure file migration will go beta in 1.19; here is the PR: kubernetes/kubernetes#90896

Hi @andyzhangx πŸ‘‹ , thank you for the update. I've updated the tracking sheet accordingly.

Let us know if there are any other PRs that you would want to track for this release.

/stage beta
/milestone v1.19

Hi @andyzhangx - I am Savitha, 1.19 Docs lead. Does the enhancement work planned for 1.19 require any new docs (or modifications to existing docs)? If not, can you please update the 1.19 Enhancement Tracker Sheet, or let me know and I can do it for you :)
If docs are required, just a friendly reminder that we're looking for a PR against k/website (branch dev-1.19) due by Friday, June 12; it can just be a placeholder PR at this time. Let me know if you have any questions!

Hi @andyzhangx -- Do you have any implementation PRs other than kubernetes/kubernetes#90896? If yes, can you please link them here - k/k or otherwise? πŸ™‚


The current release schedule is:

  • Monday, April 13: Week 1 - Release cycle begins
  • Tuesday, May 19: Week 6 - Enhancements Freeze
  • Thursday, June 25: Week 11 - Code Freeze
  • Thursday, July 9: Week 14 - Docs must be completed and reviewed
  • Tuesday, August 4: Week 17 - Kubernetes v1.19.0 released

Hi @andyzhangx - Just a reminder that docs placeholder PR against dev-1.19 is due by June 12th. Does this enhancement require any changes to docs? If so, can you update here with a link to the PR once you have it in place? If not, please update the same so that the tracking sheet can be updated accordingly. Thanks!

@savitharaghunathan here is the PR. We are going to go beta for azure disk migration first, and azure file migration stays alpha in 1.19. Thanks.

Hi @andyzhangx -- pinging back again to see if there are any implementation PRs other than kubernetes/kubernetes#90896 ? Thanks. πŸ™‚

no, thanks @palnabarun

Thanks @andyzhangx πŸ‘

Update: we don't need a docs PR for this.

Hi @andyzhangx -- just wanted to check in about the progress of the enhancement. From the above comments, can we assume that kubernetes/kubernetes#90896 is the only implementation PR and the graduation requirements for this enhancement are complete?

The release timeline has been revised recently, more details of which can be found here.

Please let me know if you have any questions. πŸ™‚


The revised release schedule is:

  • Thursday, July 9th: Week 13 - Code Freeze
  • Thursday, July 16th: Week 14 - Docs must be completed and reviewed
  • Tuesday, August 25th: Week 20 - Kubernetes v1.19.0 released

From the above comments, can we assume that kubernetes/kubernetes#90896 is the only implementation PR and the graduation requirements for this enhancement are complete?

Hi @andyzhangx πŸ‘‹, can you please update regarding the above? πŸ™‚

@palnabarun Azure disk in-tree to CSI driver migration goes beta in v1.19.0 (with PR kubernetes/kubernetes#90896), and Azure file in-tree to CSI driver migration remains alpha. Thanks.
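To make concrete what "migration" means for existing volumes: an in-tree PV spec like the sketch below stays unchanged in the API; with CSIMigrationAzureDisk enabled, attach, mount, and resize operations on it are translated to disk.csi.azure.com calls instead of the built-in plugin. All names and IDs here are hypothetical:

```yaml
# A pre-existing in-tree Azure Disk PV (hypothetical names/IDs). No user
# action is required for migration: the spec is untouched, only the code
# path that services it changes.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-azuredisk-example
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  azureDisk:               # in-tree volume source, kept as-is
    kind: Managed
    diskName: example-disk
    diskURI: /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/example-disk
```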

Hi @andyzhangx πŸ‘‹, thank you for the update. Usually, the enhancement graduates either completely or stays in the same state.

Since this enhancement involves two distinct functionalities, I am wondering whether a new enhancement should be created to track the Azure file in-tree to CSI driver migration. This enhancement could then be reworded to cover just the Azure disk in-tree to CSI driver migration and graduate to Beta in 1.19; the new Azure file enhancement can graduate in later releases.

Please let me know what you think about it.

Thank you. πŸ™‚

@palnabarun agree, I have created a new issue to track Azure File CSI driver migration: #1885
@davidz627 could you reset the title to Azure disk in-tree to CSI driver migration? thanks.

@andyzhangx -- Thanks for creating #1885 πŸ‘

/retitle Azure disk in-tree to CSI driver migration

@andyzhangx -- I have also retitled this issue and updated the issue description.

/milestone clear

/milestone v1.20

it's already beta in v1.19

Hi @andyzhangx

Enhancements Lead here. I have updated the description to reflect beta in 1.19. Any plans for this in 1.20?

Thanks!
Kirsten

Hi again @andyzhangx

Just wanted to circle back and check to see if you're planning to graduate to stable in 1.20?

As an FYI, 1.20 Enhancements Freeze is October 6th.

I do note that this KEP is using the old template, if you could update to the new one that would be great. See for ref https://github.com/kubernetes/enhancements/tree/master/keps/NNNN-kep-template

I also note that the implementation history of the KEP needs to be updated.

Thanks!
Kirsten

@andyzhangx @craiglpeters

I noticed that a 1.20 milestone was added to this, but was that in error? It doesn't seem that this will be moving to GA in 1.20. Can someone please clarify?

Spoke to @msau42, and this will remain in beta for 1.20.

/milestone clear

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

/remove-lifecycle rotten

@andyzhangx should we move this to GA now?

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

/reopen
/remove-lifecycle rotten

@msau42: Reopened this issue.

In response to this:

/reopen
/remove-lifecycle rotten

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

/milestone v1.23
An exception request was filed on 11/12/21 for the 1.23 Enhancements Freeze and was approved on 11/13/21.
Please update the kep.yaml to reflect the latest milestone (1.23), and have the kep.yaml update PR and any open k/k PRs approved by 18:00 PST on November 16th.
Linked k/k PR: kubernetes/kubernetes#104670

This enhancement does not graduate phases for 1.23; it remains in beta. The default behavior changes from off to on in 1.23 (see the opt-out sketch after the links below).

KEP milestone update: #3049
Docs: kubernetes/website#30495
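Since the gate now defaults to on, clusters that are not yet ready for the CSI driver can still opt out explicitly. A minimal sketch, assuming the same gate name as upstream; the equivalent --feature-gates flag must also be set on kube-controller-manager:

```yaml
# KubeletConfiguration excerpt opting out of the 1.23 on-by-default
# behavior (illustrative). Remove once the CSI driver is deployed and
# the cluster is ready to migrate.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CSIMigrationAzureDisk: false
```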

/milestone v1.24

/stage stable
/tracked yes

Hi @andyzhangx! 1.24 Enhancements team here. Just checking in as we approach enhancements freeze at 18:00 PT on Thursday, February 3rd. This enhancement is targeting stable for 1.24.

Here’s where this enhancement currently stands:

The status of this enhancement is at risk. Thanks!

@davidz627 could you mark the target in this issue as GA in 1.25? Thanks.

@davidz627 BTW, you will submit a PR to mark the GA status for all cloud provider drivers in k/k, right?

@andyzhangx can you confirm if you want to target GA in 1.24 or 1.25?

@msau42 Azure disk targets GA in 1.24

You will submit a PR to mark the GA status for all cloud provider drivers in k/k, right?

@andyzhangx I think it's better if you submit the PR after you have finished all the necessary testing and other GA requirements.

@msau42 here is the PR: kubernetes/kubernetes#107681
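As an operational aside: once migration is active, the PV controller annotates in-tree volumes that are being served by the external CSI components, which gives operators a way to verify which path a volume takes. A sketch of what to look for; the excerpt is illustrative, and the annotation name comes from the upstream migration implementation:

```yaml
# Excerpt of `kubectl get pv <name> -o yaml` on a cluster with
# CSIMigrationAzureDisk enabled (illustrative). The annotation shows the
# in-tree PV is now handled by the external CSI components.
metadata:
  annotations:
    pv.kubernetes.io/migrated-to: disk.csi.azure.com
```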

I see that #3109 has been merged. I have updated the status to tracked for enhancement freeze.

Hi @andyzhangx, 1.24 Docs shadow here. πŸ‘‹

This enhancement is marked as Needs Docs for the 1.24 release.

Please follow the steps detailed in the documentation to open a PR against the dev-1.24 branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Thursday 31st March 2022, 18:00 PDT.

Also, if needed take a look at Documenting for a release to familiarize yourself with the docs requirement for the release.

Thank you! πŸ™Œ

Hi @andyzhangx

I'm checking in as we approach 1.24 code freeze at 01:00 UTC Wednesday 30th March 2022.

Please ensure the following items are completed:

  • All PRs to the Kubernetes repo that are related to your enhancement are linked in the above issue description (for tracking purposes).
  • All PRs are fully merged by the code freeze deadline.

For this KEP, it looks like just k/k#107681 needs to be merged. Let me know if there are any other PRs that we should be tracking for 1.24 code freeze.

Let me know if you have any questions.

Friendly reminder to try to merge k/k#107681 before code freeze at 01:00 UTC Wednesday 30th March 2022.

kubernetes/kubernetes#107681 is already merged, so can we close this issue now? @msau42

Hello @andyzhangx @davidz627 πŸ‘‹, 1.25 Enhancements team here.

This feature has been fully implemented and is now GA in K8s version 1.24. πŸŽ‰

Please update the kep.yaml file's status to implemented and close this issue.

This would help accurate tracking in the v1.25 cycle. Thank you so much! πŸ™‚

thanks, I have opened a PR: #3300

/close

@andyzhangx: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.