kubernetes-sigs/aws-ebs-csi-driver

namespaceOverride for aws-ebs-csi-node daemonset

RuStyC0der opened this issue · 8 comments

Is your feature request related to a problem? Why is this needed?

Hi, I was trying to enforce the Pod Security Standards (PSS) via Pod Security Admission (PSA) in my cluster as much as possible. I tried to install aws-ebs-csi into a restricted namespace but ran into an error, because the aws-ebs-csi-node daemonset requires a hostPath mount (along with privileged mode), which violates the restricted PSS. I wanted to deploy aws-ebs-csi-node into another, unrestricted namespace, but the chart has no option like namespaceOverride: "aws-ebs-csi-node" for the aws-ebs-csi-node daemonset (the controller should stay in the Helm release namespace).
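
For context, a namespace enforcing the restricted level is typically labeled like the sketch below (the namespace name here is illustrative); it is this enforce label that makes admission reject the node pods' hostPath volumes and privileged container:

# Illustrative only: a namespace enforcing the "restricted" Pod Security Standard.
# The restricted level forbids hostPath volumes and privileged containers,
# which is why the aws-ebs-csi-node pods are rejected here.
apiVersion: v1
kind: Namespace
metadata:
  name: ebs-csi          # hypothetical namespace name
  labels:
    pod-security.kubernetes.io/enforce: restricted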

/feature

Describe the solution you'd like in detail

It would be great to add a namespaceOverride: "aws-ebs-csi-node" option to the Helm values, so you can deploy this daemonset to a separate namespace.
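
A minimal sketch of what such a values entry could look like - the key name and its nesting under node are assumptions, not the chart's actual schema:

node:
  # Hypothetical key: when set, render the aws-ebs-csi-node DaemonSet into this
  # namespace instead of the Helm release namespace; the controller is unaffected.
  namespaceOverride: "aws-ebs-csi-node"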

Describe alternatives you've considered

I have forked this repo and added support for the namespaceOverride: "aws-ebs-csi-node" option for the aws-ebs-csi-node daemonset, and it worked great.
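
Roughly, the fork wires the override into the DaemonSet template along these lines (a sketch under the assumed values key above, not the exact patch):

# Sketch of the DaemonSet metadata in the node template; helper names may differ in the real chart
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ebs-csi-node
  namespace: {{ .Values.node.namespaceOverride | default .Release.Namespace }}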

If you also find this feature useful and good to have, I'd be happy to create a pull request.

Hi @RuStyC0der, you should be able to specify the namespace of the driver during installation via the --namespace parameter to Helm. For example:

helm upgrade --install aws-ebs-csi-driver --namespace my-custom-namespace aws-ebs-csi-driver/aws-ebs-csi-driver

Can you elaborate on what use case this does not cover?

Sure, you can set the namespace with Helm arguments, but I want to deploy only the aws-ebs-csi-node daemonset into a separate namespace (with unrestricted PSS) and keep the controller in the restricted namespace (since it can meet the Pod Security Standards).
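
Concretely, the node DaemonSet would land in a second namespace whose enforced level admits its hostPath mounts and privileged container, while the release namespace keeps the restricted label; the chosen level here is illustrative:

# Hypothetical namespace intended only for the aws-ebs-csi-node DaemonSet;
# the "privileged" level admits hostPath volumes and privileged containers.
apiVersion: v1
kind: Namespace
metadata:
  name: aws-ebs-csi-node
  labels:
    pod-security.kubernetes.io/enforce: privileged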

Hi @ConnorJC3, any thoughts on this feature?

Hi, thanks for the update - I'll bring this up with the team and we'll discuss whether we want to accept a feature to deploy the node and controller in separate namespaces. I should be able to get a response back to you in a few days.

Hi @ConnorJC3, sorry to bother you - did you have a chance to discuss this feature with the team?

@RuStyC0der Hey, sorry about the delay - the team just discussed it and we're ok with accepting this feature if you're willing to PR it. Feel free to open up a PR and we will get it reviewed.

/close

Due to #2052

@AndrewSirenko: Closing this issue.

In response to this:

/close

Due to #2052

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.