TimeoutSeconds from metav1.ListOptions{} doesn't work as expected.
Kulagin-G opened this issue · 4 comments
There is a part of my code where I use a `*dynamic.DynamicClient` to `List` resources with the requested parameters:
```go
start := time.Now()
apps, err := g.k8sClient.
	Resource(appGVR).
	Namespace(g.cfg.KubeApi.MetricConfig.Namespace).
	List(context.TODO(),
		metav1.ListOptions{
			LabelSelector:  g.cfg.KubeApi.MetricConfig.LabelSelector,
			FieldSelector:  g.cfg.KubeApi.MetricConfig.FieldSelector,
			TimeoutSeconds: &g.cfg.KubeApi.MetricConfig.TimeoutSeconds,
			Limit:          g.cfg.KubeApi.MetricConfig.Limit,
		},
	)
elapsed := time.Since(start).Seconds()
```
The real request can take about 5s, and even if I set `TimeoutSeconds` to 1s, I see no effect.
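Interestingly, a context deadline does abort the request client-side, which is how I expected `TimeoutSeconds` to behave. A minimal sketch, reusing `g.k8sClient` and the config fields from the snippet above:

```go
// Workaround sketch: the dynamic client honors the context passed to List,
// so this request is aborted after ~1s with a context.DeadlineExceeded
// error, even though TimeoutSeconds in ListOptions has no visible effect.
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()

apps, err := g.k8sClient.
	Resource(appGVR).
	Namespace(g.cfg.KubeApi.MetricConfig.Namespace).
	List(ctx, metav1.ListOptions{
		LabelSelector: g.cfg.KubeApi.MetricConfig.LabelSelector,
	})
```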
Based on my quick research:
- We add `TimeoutSeconds` as a `url.Values` param value to the `*rest.Request` object during request preparation: https://github.com/kubernetes/client-go/blob/v0.27.4/rest/request.go#L372
- Next, the client invokes the `Do()` method with `timeout` left at its default value of 0 every time: https://github.com/kubernetes/client-go/blob/master/rest/request.go#L1061

`req.URL.Query()` example: `labelSelector=prometheus%2Fis-infra-app-metric%3Dtrue%2Capplications.argoproj.io%2Ftemplate_type%3Dapplication&timeoutSeconds=1`
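For comparison, the generated typed clients do appear to translate `TimeoutSeconds` into a client-side timeout before calling `Do()`; their generated `List` methods look roughly like this (paraphrased from generated clientset code, so treat it as a sketch):

```go
// Paraphrased from a generated typed clientset: TimeoutSeconds is copied
// into a time.Duration and applied to the request via Timeout(), which is
// what populates rest.Request's timeout field before Do() runs.
func (c *pods) List(ctx context.Context, opts metav1.ListOptions) (result *v1.PodList, err error) {
	var timeout time.Duration
	if opts.TimeoutSeconds != nil {
		timeout = time.Duration(*opts.TimeoutSeconds) * time.Second
	}
	result = &v1.PodList{}
	err = c.client.Get().
		Namespace(c.ns).
		Resource("pods").
		VersionedParams(&opts, scheme.ParameterCodec).
		Timeout(timeout).
		Do(ctx).
		Into(result)
	return
}
```

The dynamic client skips the `Timeout()` step, which would explain why only the `timeoutSeconds` query parameter ends up in the request.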
I'm not sure whether `request.timeout` should be overridden, but this `timeout` is used to build the context deadline that drops the request, which looks like the right place to pick up the value from `metav1.ListOptions`: https://github.com/kubernetes/client-go/blob/v0.27.4/rest/request.go#L969
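Purely to illustrate what I mean (a hypothetical sketch, not actual client-go code), something around that line could fall back to the `timeoutSeconds` param when no explicit client-side timeout was set:

```go
// Hypothetical sketch only, not real client-go code: fall back to the
// timeoutSeconds query param when no explicit client-side timeout is set.
if r.timeout == 0 {
	if vals := r.params["timeoutSeconds"]; len(vals) > 0 {
		if secs, err := strconv.Atoi(vals[0]); err == nil {
			r.timeout = time.Duration(secs) * time.Second
		}
	}
}

// The existing code at request.go#L969 already turns r.timeout into a
// context deadline that drops the request:
if r.timeout > 0 {
	var cancel context.CancelFunc
	ctx, cancel = context.WithTimeout(ctx, r.timeout)
	defer cancel()
}
```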
Correct me please if I'm wrong. Thank you!
P.S. I use the latest release version `k8s.io/client-go@v0.27.4`; the behavior is the same on the latest `v0.29.0-alpha.0`.
P.P.S. The `LabelSelector` string is also added to `r.params` and works as expected.
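For now I work around this by setting a global client-side timeout on the `rest.Config` used to build the dynamic client; `rest.Config.Timeout` is applied to every request made through clients built from that config. A minimal sketch, assuming an in-cluster setup:

```go
package main

import (
	"time"

	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

// newDynamicClientWithTimeout builds a dynamic client whose every request
// is subject to a 1s client-side timeout, enforced independently of
// metav1.ListOptions.TimeoutSeconds.
func newDynamicClientWithTimeout() (dynamic.Interface, error) {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	cfg.Timeout = 1 * time.Second
	return dynamic.NewForConfig(cfg)
}
```

The obvious downside is that the timeout is global per client rather than per request, which is exactly why a working `TimeoutSeconds` would be preferable.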
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten