nchammas/flintrock

Authentication failure while launching cluster

mkhan037 opened this issue · 1 comment

  • Flintrock version: 1.0.0
  • Python version: Python 3.8.8
  • OS: CentOS Linux release 7.6.1810 (Core)

Hi, I am a new user of AWS and found flintrock very straightforward to use. I previously played around with it by following this guide. However, after a few months I tried using flintrock again and ran into authentication issues while launching a test Spark cluster. I removed the previous IAM user and started from scratch, but I am still facing the same issue. The error message is given below.

(aws)[mkhan@inv36 aws]$ flintrock launch spark-cluster
Traceback (most recent call last):
  File "/home/mkhan/anaconda/anaconda3/envs/aws/bin/flintrock", line 8, in <module>
    sys.exit(main())
  File "/home/mkhan/anaconda/anaconda3/envs/aws/lib/python3.8/site-packages/flintrock/flintrock.py", line 1187, in main
    cli(obj={})
  File "/home/mkhan/anaconda/anaconda3/envs/aws/lib/python3.8/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/home/mkhan/anaconda/anaconda3/envs/aws/lib/python3.8/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/home/mkhan/anaconda/anaconda3/envs/aws/lib/python3.8/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/mkhan/anaconda/anaconda3/envs/aws/lib/python3.8/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/mkhan/anaconda/anaconda3/envs/aws/lib/python3.8/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/home/mkhan/anaconda/anaconda3/envs/aws/lib/python3.8/site-packages/click/decorators.py", line 17, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/home/mkhan/anaconda/anaconda3/envs/aws/lib/python3.8/site-packages/flintrock/flintrock.py", line 433, in launch
    cluster = ec2.launch(
  File "/home/mkhan/anaconda/anaconda3/envs/aws/lib/python3.8/site-packages/flintrock/ec2.py", line 53, in wrapper
    res = func(*args, **kwargs)
  File "/home/mkhan/anaconda/anaconda3/envs/aws/lib/python3.8/site-packages/flintrock/ec2.py", line 840, in launch
    vpc_id = get_default_vpc(region=region).id
  File "/home/mkhan/anaconda/anaconda3/envs/aws/lib/python3.8/site-packages/flintrock/ec2.py", line 431, in get_default_vpc
    default_vpc = list(
  File "/home/mkhan/anaconda/anaconda3/envs/aws/lib/python3.8/site-packages/boto3/resources/collection.py", line 83, in __iter__
    for page in self.pages():
  File "/home/mkhan/anaconda/anaconda3/envs/aws/lib/python3.8/site-packages/boto3/resources/collection.py", line 166, in pages
    for page in pages:
  File "/home/mkhan/anaconda/anaconda3/envs/aws/lib/python3.8/site-packages/botocore/paginate.py", line 255, in __iter__
    response = self._make_request(current_kwargs)
  File "/home/mkhan/anaconda/anaconda3/envs/aws/lib/python3.8/site-packages/botocore/paginate.py", line 332, in _make_request
    return self._method(**current_kwargs)
  File "/home/mkhan/anaconda/anaconda3/envs/aws/lib/python3.8/site-packages/botocore/client.py", line 276, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/home/mkhan/anaconda/anaconda3/envs/aws/lib/python3.8/site-packages/botocore/client.py", line 586, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AuthFailure) when calling the DescribeVpcs operation: AWS was not able to validate the provided access credentials
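
As a first sanity check, it helps to rule out the credentials themselves, independent of flintrock. A minimal sketch with boto3 (the region is just the one from my config below; any region works for STS):

# Sanity check: can boto3 authenticate at all, outside of flintrock?
import boto3

sts = boto3.client("sts", region_name="us-east-2")
# Raises botocore.exceptions.ClientError if the credentials are rejected;
# otherwise prints the AWS account and IAM identity being used.
print(sts.get_caller_identity())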

My config file is given below:

provider: ec2

providers:
  ec2:
    key-name: spark_cluster
    identity-file: /home/mkhan/aws/spark_cluster.pem
    instance-type: t2.micro
    region: us-east-2
    # availability-zone: <name>
    ami: ami-089c6f2e3866f0f14  # Amazon Linux 2, us-east-1
    user: ec2-user
    # ami: ami-61bbf104  # CentOS 7, us-east-1
    # user: centos
    # spot-price: <price>
    # vpc-id: <id>
    # subnet-id: <id>
    # placement-group: <name>
    # security-groups:
    #   - group-name1
    #   - group-name2
    # instance-profile-name:
    # tags:
    #   - key1,value1
    #   - key2, value2  # leading/trailing spaces are trimmed
    #   - key3,  # value will be empty
    # min-root-ebs-size-gb: <size-gb>
    tenancy: default  # default | dedicated
    ebs-optimized: no  # yes | no
    instance-initiated-shutdown-behavior: terminate  # terminate | stop
    # user-data: /path/to/userdata/script

launch:
  num-slaves: 1
  install-hdfs: True
  install-spark: True
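
The call that actually fails in the traceback is DescribeVpcs, which flintrock uses to look up the default VPC. It can be reproduced outside flintrock with a few lines of boto3; a sketch assuming the region from the config above:

# Mirror the failing call: describe the default VPC in the configured region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")
vpcs = ec2.describe_vpcs(Filters=[{"Name": "isDefault", "Values": ["true"]}])
# With working credentials this prints the default VPC (or an empty list if
# none exists); otherwise it raises the same AuthFailure ClientError as above.
print(vpcs["Vpcs"])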

Turns out the issue was that the clock on my Linux machine was slightly out of sync with EC2. After the time was synced, the issue went away.
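
For context: AWS signs every API request with a timestamp (Signature Version 4) and rejects requests whose clock is more than roughly five minutes off, which is why clock skew surfaces as AuthFailure rather than anything clock-related. A quick way to estimate the skew using only the standard library (the endpoint choice here is my own; any HTTPS host that returns a Date header works):

# Estimate local clock skew from the Date header of an HTTPS response.
# (AWS signature validation rejects requests skewed by more than ~5 minutes.)
import datetime
import urllib.error
import urllib.request
from email.utils import parsedate_to_datetime

try:
    resp = urllib.request.urlopen("https://ec2.us-east-2.amazonaws.com")
except urllib.error.HTTPError as e:
    resp = e  # error responses still carry a Date header

server = parsedate_to_datetime(resp.headers["Date"])
local = datetime.datetime.now(datetime.timezone.utc)
print(f"Approximate skew: {(local - server).total_seconds():+.1f} seconds")

On CentOS 7 the system clock is typically kept in sync by chronyd, so checking that service is a reasonable first step if the skew is large.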