Problem with container region
marcin99 opened this issue · 4 comments
After installing the VPC CNI from the YAML manifests, the container image references have the form:
602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-network-policy-agent:v1.0.7
and the images can be pulled.
When installing it as an EKS add-on from the AWS console, they appear as:
602401143452.dkr.ecr.eu-central-1.amazonaws.com/amazon/aws-network-policy-agent:v1.0.7
and I get:
pull access denied, repository does not exist or may require authorization: authorization failed: no basic auth credentials
Moreover, even when I install it from the YAML manifest, the VPC CNI still doesn't work, because it tries to fetch dependencies from the eu-central-1 region instead of us-west-2:
Warning FailedCreatePodSandBox 10s (x8 over 104s) kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image "602401143452.dkr.ecr.eu-central-1.amazonaws.com/eks/pause:3.1-eksbuild.1": failed to pull image "602401143452.dkr.ecr.eu-central-1.amazonaws.com/eks/pause:3.1-eksbuild.1": failed to pull and unpack image "602401143452.dkr.ecr.eu-central-1.amazonaws.com/eks/pause:3.1-eksbuild.1": failed to resolve reference "602401143452.dkr.ecr.eu-central-1.amazonaws.com/eks/pause:3.1-eksbuild.1": pull access denied, repository does not exist or may require authorization: authorization failed: no basic auth credentials
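The error above boils down to the registry hostname pointing at the wrong region: the account, repository, and tag are the same, but ECR hostnames embed the region (`<account>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag>`). A minimal illustrative helper (not part of any AWS tooling, just a sketch of the string surgery involved):

```python
def swap_ecr_region(image: str, region: str) -> str:
    """Replace the region segment of an ECR image reference.

    ECR references look like
    <account>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag>,
    so the region is the fourth dot-separated piece of the hostname.
    """
    host, _, rest = image.partition("/")
    parts = host.split(".")
    if len(parts) >= 6 and parts[1:3] == ["dkr", "ecr"]:
        parts[3] = region
    return ".".join(parts) + "/" + rest

broken = "602401143452.dkr.ecr.eu-central-1.amazonaws.com/eks/pause:3.1-eksbuild.1"
print(swap_ecr_region(broken, "us-west-2"))
# → 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/pause:3.1-eksbuild.1
```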
@marcin99 this looks like you are unable to pull the pause container, not the VPC CNI or network policy agent image. This looks more like awslabs/amazon-eks-ami#1597
In general, you should be able to pull the VPC CNI image from us-west-2 in the eu-central-1 region.
OK, thanks @jdn5126. My workaround for Bottlerocket nodes, in bootstrap_extra_args:
[plugins."io.containerd.grpc.v1.cri"]
sandbox_image = "602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/pause:3.1-eksbuild.1"
and now the aws-node pods are running.
Ah that's a good workaround, and it makes sense that the issue was with the pause image. In that issue I linked, the AMI team is racing to release new AMIs where the pause image is properly pinned to avoid garbage collection.