Improve documentation on how to authenticate to AWS
yorinasub17 opened this issue · 4 comments
This is frequently a point of confusion, especially for those who use AWS profiles. We should at the very least link to our blog post series https://blog.gruntwork.io/a-comprehensive-guide-to-authenticating-to-aws-on-the-command-line-63656a686799 in the README.
Hey @yorinasub17, I have a problem using `aws_profile` with the aws provider:
provider "aws" { access_key = var.aws_access_key secret_key = var.aws_secret_key region = var.aws_region profile = var.aws_profile }
During apply, I see these errors and it eventually times out:
```
module.eks_cluster.null_resource.wait_for_api (local-exec): time="2020-04-28T13:01:12-05:00" level=warning msg="Error retrieving cluster info Error finding AWS credentials. Did you set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables or configure an AWS profile? Underlying error: NoCredentialProviders: no valid providers in chain. Deprecated.\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors" name=kubergrunt
```
Is this because `profile` is not allowed when using kubergrunt with the terraform-aws-eks module?
`kubergrunt` doesn't use the credentials from the AWS profile (because it is an external binary), and instead relies on the environment. AFAIK, this feature (sharing authentication between scripts and the provider) is not possible without using environment variables, because Terraform does not export the credentials generated from the provider config.
If you want to use AWS profiles, the recommended way is to set the environment variable `AWS_PROFILE`. E.g., `AWS_PROFILE=dev terraform apply`.
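For example, a minimal sketch of that workflow, assuming a profile named `dev` exists in your `~/.aws/credentials`:

```sh
# Exporting once makes the profile visible to Terraform, the S3 backend,
# and external binaries like kubergrunt alike.
export AWS_PROFILE=dev
terraform apply
```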
This comes from the recommended practice of decoupling the credentials from your config by using external tooling such as aws-vault or the credentials profile, and providing them as environment variables. This is covered in https://blog.gruntwork.io/a-comprehensive-guide-to-authenticating-to-aws-on-the-command-line-63656a686799 .
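For illustration, a sketch of the aws-vault approach (the profile name `dev` is hypothetical): aws-vault fetches temporary credentials for the profile and injects them into the subprocess's environment, so Terraform and kubergrunt never see long-lived keys.

```sh
# Runs terraform with short-lived credentials for the "dev" profile,
# exported as AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_SESSION_TOKEN.
aws-vault exec dev -- terraform apply
```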
There are multiple use cases with AWS profiles (`~/.aws/config`, `~/.aws/credentials`). If you are using `~/.aws/config`, you need:

```sh
AWS_SDK_LOAD_CONFIG=1
AWS_PROFILE=<your profile>
....
kubergrunt ...
```
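For context, `AWS_SDK_LOAD_CONFIG=1` tells the Go AWS SDK (which kubergrunt is built on) to read `~/.aws/config` in addition to `~/.aws/credentials`. A minimal sketch of such a config file, with a hypothetical profile name and account ID:

```ini
# ~/.aws/config
[profile dev]
region         = us-east-1
role_arn       = arn:aws:iam::123456789012:role/dev-role
source_profile = default
```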
I was using AWS profiles (`$HOME/.aws/config`, `$HOME/.aws/credentials`) for quite some time, and I was also using `AWS_SDK_LOAD_CONFIG=1` and `AWS_PROFILE=<my profile>`.
After changing the Terraform backend to S3, it suddenly stopped working. I was able to make it work again using environment variables (`export AWS_SECRET_ACCESS_KEY=xxxxx; export AWS_ACCESS_KEY_ID=yyyyyy`).
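As a side note, a quick way to check which credentials the default provider chain actually resolves (assuming the AWS CLI is installed):

```sh
# Prints the account ID and ARN of whatever credentials are picked up
aws sts get-caller-identity
# Same check scoped to a specific profile
AWS_PROFILE=dev aws sts get-caller-identity
```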
Since that was not ideal for me, I was able to revert the changes to use AWS profiles again, but I had to remove the `.terraform` folder and run `terraform init` again.
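A minimal sketch of that recovery sequence (this re-initializes the backend, so the cached backend configuration is rebuilt):

```sh
# Drop the locally cached backend/provider state, then re-initialize
rm -rf .terraform
terraform init
```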
Not sure if that helps in this case.