Operand configurability
marquiz opened this issue · 16 comments
A meta-issue (inspired by #18) where I tried to collect things that should be configurable. The mechanism to use would be the operand CRD.
Master:
- mTLS configuration
  - `--ca-file`, `--cert-file` and `--key-file`
  - `--verify-node-name`
- `--extra-label-ns`
- `--resource-labels`
- support `--prune` in some way to help with un-installation??
Worker:
I think many of these could be covered by supporting configurable args for both master and worker:
```go
type NodeFeatureDiscoverySpec struct {
	OperandNamespace  string   `json:"operandNamespace"`
	OperandImage      string   `json:"operandImage"`
	OperandMasterArgs []string `json:"operandMasterArgs"`
	OperandWorkerArgs []string `json:"operandWorkerArgs"`
}
```
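As a rough sketch of this idea (the managed default flag and the helper name are hypothetical, not the operator's actual code), the operator could splice the user-supplied pass-through args onto the operand command line after its own managed flags:

```go
package main

import "fmt"

// NodeFeatureDiscoverySpec mirrors the proposed CRD fields above
// (JSON tags omitted for brevity).
type NodeFeatureDiscoverySpec struct {
	OperandNamespace  string
	OperandImage      string
	OperandMasterArgs []string
	OperandWorkerArgs []string
}

// masterArgs builds the nfd-master container args: operator-managed
// flags first, then the user-supplied pass-through args from the CR.
func masterArgs(spec NodeFeatureDiscoverySpec) []string {
	// Illustrative operator-managed flag; the real set would come from
	// the operator's own TLS handling.
	managed := []string{"--ca-file=/etc/kubernetes/node-feature-discovery/certs/ca.crt"}
	return append(managed, spec.OperandMasterArgs...)
}

func main() {
	spec := NodeFeatureDiscoverySpec{
		OperandMasterArgs: []string{"--verify-node-name", "--extra-label-ns=vendor.example.com"},
	}
	fmt.Println(masterArgs(spec))
}
```

This keeps the managed flags authoritative while still letting users reach any operand flag the operator has no dedicated field for.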
Perhaps TLS should be managed by cert-manager?
> I think many of these could be covered by supporting configurable args for both master and worker:

Sure, but they're hard to verify and easy to mess up. I don't know if this is a usual pattern in the "operator world", though. I don't think it would be much harder to add separate fields for the important command-line flags. Maybe the raw args could serve as a first aid, though, useful for debugging or very exotic use cases(?)
> Perhaps TLS should be managed by cert-manager?

Yes, it should; there's even an issue about that, but I haven't had the bandwidth for it.
> Sure, but they're hard to verify and easy to mess up. I don't know if this is a usual pattern in the "operator world", though.

We use validating webhooks to validate some content in more detail than is possible with OpenAPI schema validation.
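As a rough illustration of that approach (the allow-list contents and function name are hypothetical, not an existing webhook), such a webhook could reject pass-through args whose flag name isn't one the operand understands, a cross-value check OpenAPI schema validation can't express:

```go
package main

import (
	"fmt"
	"strings"
)

// knownMasterFlags is an illustrative allow-list of nfd-master flags
// accepted in operandMasterArgs, taken from the list in this issue.
var knownMasterFlags = map[string]bool{
	"--ca-file":          true,
	"--cert-file":        true,
	"--key-file":         true,
	"--verify-node-name": true,
	"--extra-label-ns":   true,
	"--resource-labels":  true,
	"--prune":            true,
}

// validateArgs returns an error for the first arg whose flag name
// (the part before any "=") is not on the allow-list.
func validateArgs(args []string) error {
	for _, a := range args {
		flag := strings.SplitN(a, "=", 2)[0]
		if !knownMasterFlags[flag] {
			return fmt.Errorf("unsupported operand arg: %q", a)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateArgs([]string{"--extra-label-ns=vendor.example.com"}))
	fmt.Println(validateArgs([]string{"--bogus"}))
}
```

The webhook's admission handler would call a check like this on the CR's args fields and deny the request with the returned message on error.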
/kind feature
/label test-label
@ArangoGutierrez: The label(s) /label test-label
cannot be applied. These labels are supported: api-review, community/discussion, community/maintenance, community/question, cuj/build-train-deploy, cuj/multi-user, platform/aws, platform/azure, platform/gcp, platform/minikube, platform/other, tide/merge-method-merge, tide/merge-method-rebase, tide/merge-method-squash
In response to this:
> /label test-label
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Sorry for that, I was testing some Prow things for this repo.
#36 will address user-specified worker config through a ConfigMap.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with `/remove-lifecycle rotten`.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
@ArangoGutierrez could you educate me on this? Do we already support most of this?
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
> /close