kubernetes-client/javascript

Incorrect cached token fetched with multiple kubeconfigs

sheldonkwok opened this issue · 7 comments

When using multiple kubeconfig files with the same user.name field, the kubeconfig user that is fetched first for execAuth will be used for all subsequent users that match the same name.

The issue arises here.
https://github.com/kubernetes-client/javascript/blob/master/src/exec_auth.ts#L74

I'm not sure if it's an antipattern to use multiple kubeconfigs, but maybe we could document this if we don't want to fix it. Using distinct user names is an easy enough workaround. We could also cache on a hash of the whole user object instead of just the name. I can implement that solution, or another proposed one, if we want to address this.
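A minimal sketch of the hashed-cache-key idea. The User shape here is illustrative only; real kubeconfig users carry more fields, and the actual cache lives in exec_auth.ts:

```typescript
import { createHash } from 'crypto';

// Illustrative user shape; real kubeconfig users also carry tokens,
// certs, and a fuller exec config.
interface User {
  name: string;
  exec?: { command: string; args?: string[] };
}

// Key the token cache on a hash of the whole user object rather than
// user.name alone, so same-named users from different kubeconfigs get
// distinct cache entries.
function tokenCacheKey(user: User): string {
  return createHash('sha256').update(JSON.stringify(user)).digest('hex');
}
```

One caveat with this sketch: JSON.stringify is sensitive to key order, so two semantically identical user objects serialized with different field order would hash to different keys; that only costs an extra exec call, not correctness.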

Multiple kubeconfigs are really not well supported in this client. It is definitely an unusual configuration in the wild.

That said, I'd be happy to take PRs to improve the handling of multiple kubeconfig files (or to provide a more explicit "we don't support this" message).

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

/remove-lifecycle stale

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

#658 provides documentation.

Can't we resolve this by increasing the specificity of the cache key?

For example, instead of const cachedToken = this.tokenCache[user.name];, the cache key could combine the cluster name, the cluster server, and the user name.
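A sketch of that more specific key, assuming hypothetical Cluster and User shapes (the real types live in the client's config model):

```typescript
// Hypothetical shapes for the pieces a kubeconfig context ties together.
interface Cluster {
  name: string;
  server: string;
}
interface User {
  name: string;
}

// Combine cluster name, server URL, and user name into one cache key,
// so 'admin' on cluster A and 'admin' on cluster B stay separate.
function tokenCacheKey(cluster: Cluster, user: User): string {
  return `${cluster.name}|${cluster.server}|${user.name}`;
}
```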

LeoK80 commented

As a workaround, could you perhaps merge multiple kubeconfig files into one?

This still lets you switch the current context, of which there should only be one field. The KubeConfig class in config.ts does provide a public method to set the current context to a new value, effectively allowing you to target other clusters without requiring multiple kubeconfig files:

public setCurrentContext(context: string): void {
    this.currentContext = context;
}
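A toy model of this workaround, not the real KubeConfig class: one merged config holds every context, and setCurrentContext switches between them (the class name and context names here are made up for illustration):

```typescript
// Toy model of the merged-kubeconfig workaround: every context from the
// separate files lives in one config, and switching targets is a single
// setCurrentContext call, mirroring KubeConfig's public API shape.
class MergedConfig {
  private currentContext: string;

  constructor(private readonly contexts: string[]) {
    // Default to the first context, as a merged file would have
    // one current-context field.
    this.currentContext = contexts[0] ?? '';
  }

  public setCurrentContext(context: string): void {
    if (!this.contexts.includes(context)) {
      throw new Error(`unknown context: ${context}`);
    }
    this.currentContext = context;
  }

  public getCurrentContext(): string {
    return this.currentContext;
  }
}
```

With the real client you would produce the merged file up front (kubectl can flatten a colon-separated KUBECONFIG list) and then call setCurrentContext before making API clients.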