kubernetes-sigs/image-builder

prow jobs failing without `PACKER_GITHUB_API_TOKEN`

mboersma opened this issue · 6 comments

What steps did you take and what happened:

There have been many failures in CI recently due to this:

hack/ensure-packer.sh
packer_1.9.2_linux_amd64.zip: OK
Archive:  packer_1.9.2_linux_amd64.zip
  inflating: packer                  
'packer' has been installed to /home/prow/go/src/sigs.k8s.io/image-builder/images/capi/.local/bin, make sure this directory is in your $PATH
hack/ensure-goss.sh
/root/.packer.d/plugins/packer-provisioner-goss: OK
/home/prow/go/src/sigs.k8s.io/image-builder/images/capi/.local/bin/packer init packer/config.pkr.hcl
Failed getting the "github.com/hashicorp/ansible" plugin:
1 error occurred:
	* Plugin host rate limited the plugin getter. Try again in 12m59.953283047s.
HINT: Set the PACKER_GITHUB_API_TOKEN env var with a token to get more requests.
GET https://api.github.com/repos/hashicorp/packer-plugin-ansible/git/matching-refs/tags: 403 API rate limit exceeded for 18.189.147.8. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.) [rate reset in 13m00s]


make: *** [Makefile:63: deps-ami] Error 1

What did you expect to happen:

Packer should install its plugins without error.
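Per the HINT in the log above, the immediate workaround is to export `PACKER_GITHUB_API_TOKEN` before running `packer init`. A minimal sketch (the token value here is a placeholder; in practice use a real read-only personal access token):

```shell
# Placeholder value; substitute a real read-only GitHub PAT.
export PACKER_GITHUB_API_TOKEN="ghp_exampletoken"

# With the variable set, `packer init` authenticates its GitHub API
# calls and gets the much higher authenticated rate limit, e.g.:
#   packer init packer/config.pkr.hcl
echo "token set: ${PACKER_GITHUB_API_TOKEN:+yes}"
```

In CI the same variable would need to be injected into the job environment as a secret rather than exported inline.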

Anything else you would like to add:

Examples of this failure (feel free to add to this list!):

/kind bug

It looks like the cluster-api projects are seeing similar problems since migrating to the community EKS cluster: kubernetes/org#4165

Taken from kubernetes/test-infra#30501 (comment)

I explained on Slack why rate limiting is more frequent in the EKS cluster than in the GKE clusters: it comes down to the limited set of IPs available for egress traffic. On AWS we are using NAT gateways, but on GCP every GKE node has its own IP address, so there is a much larger pool of egress addresses available.
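To make the arithmetic concrete: GitHub allows 60 unauthenticated API requests per hour per source IP, and every job egressing through the same NAT gateway IP splits that budget. A rough illustration (the job count is a made-up number, not a measured value):

```shell
# GitHub's unauthenticated API limit is 60 requests/hour per source IP.
UNAUTH_LIMIT=60
JOBS=20   # hypothetical number of CI jobs sharing one NAT gateway IP
echo "requests/hour available per job: $(( UNAUTH_LIMIT / JOBS ))"
```

An authenticated token raises the limit to 5,000 requests/hour per token, which is why the HINT in the failure log suggests setting one.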

With regards to getting a token:

- Ask for a read-only token from the existing bot accounts in the #github-management channel on Slack
- Get ameukam to load it in here: kubernetes/k8s.io@be85c53/infra/gcp/terraform/k8s-infra-prow-build/prow-build/resources/test-pods/externalsecrets.yaml

I'll reach out in #github-management to see if we can get a token sorted and take it from there.

Edit: Asked here: https://kubernetes.slack.com/archives/C01672LSZL0/p1692945837121219

I've created a specific request issue: kubernetes/org#4412

Generic request to add a token for all projects to use: kubernetes/org#4433

This now seems to be fixed by the change to using public IPs for the nodes in the community EKS cluster. Closing for now; we can re-open if it reoccurs.

I will be watching the upstream issues and will report back on any changes from those.