kubernetes-sigs/aws-iam-authenticator

[Bug]: Kubernetes Client-go informers getting "Unauthorized" error after 15 mins

jeevanragula opened this issue · 7 comments

What happened?

Kubernetes client-go informers start getting an "Unauthorized" error after 15 minutes.

Per the Kubernetes client-go blogs and discussions, client-go is supposed to refresh the token after 15 minutes, but that is not happening here. Is there any way to refresh the token without stopping the informer?

```go
gen, err := token.NewGenerator(false, false)
if err != nil {
	return token.Token{}, err
}

opts := &token.GetTokenOptions{
	Region:               cluster.Region,
	ClusterID:            aws.StringValue(&cluster.Name),
	AssumeRoleARN:        cluster.AssumeRoleConfig.RoleArn,
	AssumeRoleExternalID: cluster.AssumeRoleConfig.ExternalId,
	SessionName:          "testsession",
	Session:              awsSession,
}

// The generated token is valid for ~15 minutes and is copied into the
// rest.Config once; it is never refreshed for the lifetime of the client.
tok, err := gen.GetWithOptions(opts)
if err != nil {
	return token.Token{}, err
}

clientConfig := &rest.Config{
	Host:        cluster.Endpoint,
	BearerToken: tok.Token,
	TLSClientConfig: rest.TLSClientConfig{
		CAData: ca,
	},
}

dynamicClient, err := dynamic.NewForConfig(clientConfig)
if err != nil {
	return token.Token{}, err
}

factory := dynamicinformer.NewFilteredDynamicSharedInformerFactory(dynamicClient, 60*time.Minute, "", nil)
gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}

informer := factory.ForResource(gvr).Informer()

informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
	AddFunc: func(obj interface{}) {
		fmt.Println(obj)
	},
	UpdateFunc: func(old, new interface{}) {
		fmt.Println(old)
		fmt.Println(new)
	},
})

factory.Start(ctx.Done())
```

What did you expect to happen?

The Kubernetes Auth Token created by aws-iam-authenticator should be refreshed automatically.

Anything else we need to know?

No response

Installation tooling

other (please specify in description)

AWS IAM Authenticator server Version

AWS EKS Managed service

Client information

- OS/arch: Darwin/arm64 & Linux/amd64
- kubernetes client & version: k8s.io/client-go v0.25.2
- authenticator client & version: sigs.k8s.io/aws-iam-authenticator v0.5.9

Kubernetes API Version

Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.0", GitCommit:"a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2", GitTreeState:"clean", BuildDate:"2022-08-23T17:36:43Z", GoVersion:"go1.19", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"22+", GitVersion:"v1.22.15-eks-fb459a0", GitCommit:"be82fa628e60d024275efaa239bfe53a9119c2d9", GitTreeState:"clean", BuildDate:"2022-10-24T20:33:23Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.25) and server (1.22) exceeds the supported minor version skew of +/-1

aws-iam-authenticator YAML manifest

No response

kube-apiserver YAML manifest

No response

aws-iam-authenticator logs

No response

@nckturner Do you have any ideas, or can you suggest a way to refresh the tokens?
I see you added the expiration feature to the aws-iam-authenticator module.

@jeevanragula Write a timed task that checks the token regularly and regenerates it before it expires. This is how I currently handle it.
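
A minimal sketch of that timed-refresh idea, reusing the `gen` and `opts` values from the snippet in the issue: instead of a static `BearerToken`, wrap the client's transport so every request carries a token that is regenerated shortly before it expires. The `refreshingRoundTripper` type, the `WireConfig` helper, and the two-minute margin are illustrative choices, not part of client-go or the authenticator.

```go
// Package ekstoken sketches a self-refreshing bearer token transport.
package ekstoken

import (
	"fmt"
	"net/http"
	"sync"
	"time"

	"k8s.io/client-go/rest"
	"sigs.k8s.io/aws-iam-authenticator/pkg/token"
)

// refreshingRoundTripper injects a freshly generated authenticator token
// into every request, regenerating it before the recorded expiration.
type refreshingRoundTripper struct {
	base http.RoundTripper
	gen  token.Generator
	opts *token.GetTokenOptions

	mu      sync.Mutex
	current token.Token
}

func (r *refreshingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	r.mu.Lock()
	// Regenerate a couple of minutes before the token's expiration
	// (aws-iam-authenticator tokens are only valid for ~15 minutes).
	if time.Until(r.current.Expiration) < 2*time.Minute {
		tok, err := r.gen.GetWithOptions(r.opts)
		if err != nil {
			r.mu.Unlock()
			return nil, fmt.Errorf("refreshing token: %w", err)
		}
		r.current = tok
	}
	tok := r.current
	r.mu.Unlock()

	// Clone the request before mutating headers; RoundTrippers must not
	// modify the original request.
	req = req.Clone(req.Context())
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	return r.base.RoundTrip(req)
}

// WireConfig attaches the refreshing transport to a rest.Config; no static
// BearerToken needs to be set on the config afterwards.
func WireConfig(cfg *rest.Config, gen token.Generator, opts *token.GetTokenOptions) {
	cfg.WrapTransport = func(rt http.RoundTripper) http.RoundTripper {
		return &refreshingRoundTripper{base: rt, gen: gen, opts: opts}
	}
}
```

With this in place, informers built on the resulting dynamic client should keep working past the 15-minute mark, since every list/watch request (including re-established watches) picks up a current token.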

Thanks @jeevanragula @fengshunli for your feedback.

Have you tried the client-go credential plugins? That works for me; the client-go informers keep running.

You can find the documentation at https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins

A kubeconfig sample that uses aws-iam-authenticator is at https://github.com/kubernetes-sigs/aws-iam-authenticator/blob/master/hack/dev/kubeconfig.yaml (also pasted below).

A Go code sample is at https://github.com/kubernetes-sigs/aws-iam-authenticator/blob/master/tests/e2e/tests.go#L69

Thank you

```yaml
users:
- name: kind-authenticator
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - token
      - -i
      - {{CLUSTER_NAME}}
      command: {{AUTHENTICATOR_BIN}}
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      - name: AWS_DEFAULT_REGION
        value: {{REGION}}
      interactiveMode: IfAvailable
      provideClusterInfo: false
```
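
For completeness, here is a minimal sketch of wiring a kubeconfig like the one above into the same informer setup; the kubeconfig path is a placeholder and the rest mirrors the code from the issue. client-go's exec credential plugin support re-runs aws-iam-authenticator whenever the cached credential expires, so no manual refresh is needed.

```go
package main

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// BuildConfigFromFlags wires the exec plugin from the kubeconfig into rest.Config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}

	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same informer setup as in the issue, but tokens are now refreshed
	// transparently by the credential plugin.
	factory := dynamicinformer.NewFilteredDynamicSharedInformerFactory(client, 60*time.Minute, "", nil)
	gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}
	_ = factory.ForResource(gvr).Informer()

	ctx := context.Background()
	factory.Start(ctx.Done())
	factory.WaitForCacheSync(ctx.Done())

	// Block so the informer keeps running.
	select {}
}
```

If you would rather keep building rest.Config in code instead of loading a kubeconfig file, rest.Config also exposes an ExecProvider field that accepts the same exec plugin configuration.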

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.