kubernetes-csi/livenessprobe

Support exec-style liveness probes in addition to HTTP

travisghansen opened this issue · 8 comments

I have scenarios where one or both of the node and controller services run in the host network namespace (hostNetwork: true). Combined with the possibility of multiple deployments, this means I have to consume (and reserve) host ports, which is less than ideal.

What would be great is the ability to use exec-style probes and simply have the probe binary connect to the UDS and exit as appropriate.
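To make that concrete, here is a minimal sketch in Go of what such an exec probe could look like. This is not part of the project; the socket path /csi/csi.sock is a placeholder, and it assumes the driver reports health through the standard CSI Identity Probe RPC:

```go
// execprobe.go: sketch of an exec-style liveness probe for a CSI driver.
// Assumption: the driver's endpoint is a UDS at /csi/csi.sock (placeholder).
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// gRPC understands the unix:// scheme natively, so no custom dialer is needed.
	conn, err := grpc.DialContext(ctx, "unix:///csi/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		fmt.Fprintf(os.Stderr, "failed to connect: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()

	// Issue the CSI Identity Probe call; a non-zero exit marks the probe failed.
	resp, err := csi.NewIdentityClient(conn).Probe(ctx, &csi.ProbeRequest{})
	if err != nil {
		fmt.Fprintf(os.Stderr, "probe failed: %v\n", err)
		os.Exit(1)
	}
	// Per the CSI spec, an unset Ready field is treated as ready.
	if resp.GetReady() != nil && !resp.GetReady().GetValue() {
		fmt.Fprintln(os.Stderr, "driver is not ready")
		os.Exit(1)
	}
}
```

A binary like this could be invoked from an exec livenessProbe in the container spec, so no host port ever needs to be exposed or reserved.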

The way we use liveness probes in CSI drivers is that the liveness sidecar exposes a port, but the actual liveness probe is run against the driver container. If the check fails, the driver container is killed and restarted.
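For context, the sidecar's mechanism boils down to proxying an HTTP health endpoint to the driver's CSI Probe call over the UDS. A simplified sketch of that idea, not the actual livenessprobe source; the socket path and port are placeholders:

```go
// healthz.go: simplified sketch of the livenessprobe sidecar's mechanism:
// serve an HTTP endpoint that forwards to the driver's CSI Probe over the UDS.
package main

import (
	"context"
	"log"
	"net/http"
	"time"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		ctx, cancel := context.WithTimeout(r.Context(), 5*time.Second)
		defer cancel()

		conn, err := grpc.DialContext(ctx, "unix:///csi/csi.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		defer conn.Close()

		if _, err := csi.NewIdentityClient(conn).Probe(ctx, &csi.ProbeRequest{}); err != nil {
			// The httpGet probe declared on the *driver* container points at
			// this port, so a failure here restarts the driver, not the sidecar.
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":9808", nil))
}
```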

Is that possible with an exec probe? If the livenessprobe binary is in a sidecar with a container-level exec liveness probe, only the sidecar gets restarted, which is not really useful.

@travisghansen, do you have a better idea?

@jsafrane You are correct. My thinking was that the project could publish pre-built binaries as well as the images, so that anyone who wanted to inject the binary straight into the driver container could do so.

Having said that, I ended up creating a little script inside my containers that replicates the functionality of this project but with exec-style invocation. This was easy enough, but it is not ideal if the logic ever gets more complicated.

I saw this project recently: https://github.com/grpc-ecosystem/grpc-health-probe

But I think the problem is that it requires the CSI plugin to implement the generic gRPC health-checking service (grpc.health.v1), instead of the CSI-specific Probe call we already have.
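To spell out the mismatch, here is a sketch of the call grpc-health-probe makes next to the call every CSI plugin is guaranteed to implement; the socket path is again a placeholder:

```go
// Sketch of the two health calls side by side. grpc-health-probe issues the
// standard grpc.health.v1 Check, which a CSI driver does not serve unless it
// explicitly registers the health service; the CSI spec only requires the
// Identity Probe RPC.
package main

import (
	"context"
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	conn, err := grpc.Dial("unix:///csi/csi.sock", // placeholder socket path
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	ctx := context.Background()

	// What grpc-health-probe calls: fails with codes.Unimplemented on a
	// driver that only serves the CSI services.
	if resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{}); err == nil {
		fmt.Println("grpc.health.v1 status:", resp.GetStatus())
	} else {
		fmt.Println("grpc.health.v1 check failed:", err)
	}

	// What the CSI spec guarantees: every plugin implements Identity.Probe.
	if resp, err := csi.NewIdentityClient(conn).Probe(ctx, &csi.ProbeRequest{}); err == nil {
		fmt.Println("csi probe ready:", resp.GetReady().GetValue())
	} else {
		fmt.Println("csi probe failed:", err)
	}
}
```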

https://github.com/democratic-csi/democratic-csi/blob/master/bin/liveness-probe

That's what I did... obviously not ideal for drivers not written in Node.js.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@fejta-bot: Closing this issue.

