[Bug]: "Unable to locate credentials" when attempting "make shell"
Installation method
Own AWS account
What happened?
When attempting to run make shell in my console to set up a local development environment, I get the following error:
/usr/local/src/eks-workshop-v2$ make shell
bash hack/shell.sh '' '' ''
Warning: Defaulting region to us-west-2
Building container images...
sha256:ac23038495797d39abeb92dcb1658e0666762c3437b619f0b5a1a3c931130753
Starting shell in container...
Unable to locate credentials. You can configure credentials by running "aws configure".
make: *** [Makefile:27: shell] Error 253
What did you expect to happen?
I expected the container to build and start normally and drop me into the container's shell for local development.
How can we reproduce it?
For me, I just followed the workflow in authoring_content.md, using AWS temporary access keys that were configured locally and verified with aws sts get-caller-identity.
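For completeness, the temporary keys end up as environment variables in the same terminal that later runs make, roughly like this (placeholder values; the exact export mechanism may differ in your setup):

# temporary credentials exported in the shell that runs make
export AWS_ACCESS_KEY_ID=<access key>
export AWS_SECRET_ACCESS_KEY=<secret key>
export AWS_SESSION_TOKEN=<session token>

# confirm the credentials resolve to the expected identity
aws sts get-caller-identity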
Anything else we need to know?
I was able to get into the container shell by manually short-circuiting the build process and running the container directly:

docker run --rm -it \
  -v /usr/local/src/eks-workshop-v2/manifests:/manifests \
  -v /usr/local/src/eks-workshop-v2/cluster:/cluster \
  -e AWS_REGION=us-west-2 \
  -e AWS_ACCESS_KEY_ID=<access key> \
  -e AWS_SECRET_ACCESS_KEY=<aws secret key> \
  eks-workshop-environment bash
EKS version
/usr/local/src/eks-workshop-v2$ eksctl version
0.182.0
I've spent the afternoon trying to track this down, and after some debugging it looks like the command on line 57 of shell.sh is not passing AWS credentials to the container correctly. Even though I have the AWS CLI configured with credentials via aws configure and verified with aws sts get-caller-identity, the environment variables for the access keys are never set and passed to the container start command.

Here is my debugging setup:
# -v $SCRIPT_DIR/../manifests:/manifests \
# -v $SCRIPT_DIR/../cluster:/cluster \
# -e 'EKS_CLUSTER_NAME' -e 'AWS_REGION' \
# $aws_credential_args $container_image $shell_command
echo "-" $aws_credential_args
echo "-" $container_image
echo "-" $shell_command
echo "-" $ASSUME_ROLE
which outputs:
/usr/local/src/eks-workshop-v2$ make shell
bash hack/shell.sh '' '' ''
Warning: Defaulting region to us-west-2
Building container images...
sha256:ac23038495797d39abeb92dcb1658e0666762c3437b619f0b5a1a3c931130753
Starting shell in container...
-
- eks-workshop-environment
-
-
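A quick way to confirm that the credential variables are actually exported in the shell that invokes make, so the empty aws_credential_args above is coming from the script's logic rather than from my environment (values redacted by the sed):

env | grep -E 'AWS_(ACCESS_KEY_ID|SECRET_ACCESS_KEY|SESSION_TOKEN)' | sed 's/=.*/=<redacted>/'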
I was able to rework some of the logic in shell.sh by adding the following between lines 39 and 40, which sets aws_credential_args to pass through the current credentials. Otherwise the aws_credential_args variable appears to be left as an empty string, which prevents the container session from starting correctly.
elif [ ! -z "$AWS_ACCESS_KEY_ID" ] && [ ! -z "$AWS_SECRET_ACCESS_KEY" ] && [ ! -z "$AWS_SESSION_TOKEN" ]; then
  aws_credential_args="-e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN"
This is the output now:
/usr/local/src/eks-workshop-v2$ make shell
bash hack/shell.sh '' '' ''
Warning: Defaulting region to us-west-2
Building container images...
sha256:ac23038495797d39abeb92dcb1658e0666762c3437b619f0b5a1a3c931130753
Starting shell in container...
- -e AWS_ACCESS_KEY_ID=<aws_access_key> -e AWS_SECRET_ACCESS_KEY=<aws_secret_key> -e AWS_SESSION_TOKEN=<aws_session_token>
Updated context arn:aws:eks:us-west-2:927695479421:cluster/eks-workshop in /home/ec2-user/.kube/config
[ec2-user@229d7360ef40 environment]$
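For what it's worth, a possible refinement of that branch (just a sketch, not something I've tested beyond the change above) would also cover long-lived keys that have no session token:

elif [ ! -z "$AWS_ACCESS_KEY_ID" ] && [ ! -z "$AWS_SECRET_ACCESS_KEY" ]; then
  aws_credential_args="-e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY"
  # temporary credentials additionally need the session token
  if [ ! -z "$AWS_SESSION_TOKEN" ]; then
    aws_credential_args="$aws_credential_args -e AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN"
  fi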
Can someone please double-check my work and see if this is expected behaviour, or if I'm missing something in the aws configure setup that is causing these issues?