Can't get autoscaling=true working with IAM role
motilevy opened this issue
I have the following IAM policy attached to a role my instances are on:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingInstances",
        "ec2:DescribeInstances"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
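As a sanity check that the role is actually attached and serving temporary credentials, the EC2 instance metadata service can be queried from the host (ROLE_NAME below stands in for whatever the instance profile's role is called):

curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ROLE_NAME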
Launching a container:
docker run -d --name rabbitmq --net=host \
-p 4369:4369 -p 5672:5672 \
-p 15672:15672 \
-p 25672:25672 \
-e AUTOCLUSTER_TYPE=aws \
-e AUTOCLUSTER_LOG_LEVEL=debug \
-e AUTOCLUSTER_CLEANUP=true \
-e AWS_AUTOSCALING=true \
-e AUTOCLUSTER_DELAY=10 \
-e CLEANUP_WARN_ONLY=false \
-e AWS_DEFAULT_REGION=us-east-2 \
aweber/rabbitmq-autocluster
fails with the following:
=INFO REPORT==== 31-Jan-2017::03:59:53 ===
node : rabbit@ip-10-5-11-218
home dir : /var/lib/rabbitmq
config file(s) : /usr/lib/rabbitmq/etc/rabbitmq/rabbitmq.config
cookie hash : iqG7DCBA+lxNNLQq/Y6efg==
log : tty
sasl log : tty
database dir : /var/lib/rabbitmq/mnesia
=INFO REPORT==== 31-Jan-2017::03:59:54 ===
autocluster: log level set to debug
=INFO REPORT==== 31-Jan-2017::03:59:54 ===
autocluster: Using AWS backend
=INFO REPORT==== 31-Jan-2017::03:59:54 ===
autocluster: Delaying startup for 8744ms.
=INFO REPORT==== 31-Jan-2017::04:00:03 ===
autocluster: Starting aws registration.
=ERROR REPORT==== 31-Jan-2017::04:00:03 ===
Failed to retrieve AWS credentials: undefined
=INFO REPORT==== 31-Jan-2017::04:00:03 ===
autocluster: Setting region: "us-east-2"
=ERROR REPORT==== 31-Jan-2017::04:00:03 ===
autocluster: Error fetching autoscaling group instance list: credentials
--- snip ---
BOOT FAILED
===========
Error description:
{could_not_start,rabbit,
{function_clause,
[{autocluster,maybe_register,
[error,aws,autocluster_aws],
[{file,"src/autocluster.erl"},{line,111}]},
{autocluster,init,0,[{file,"src/autocluster.erl"},{line,33}]},
{rabbit_boot_steps,'-run_step/2-lc$^1/1-1-',1,
[{file,"src/rabbit_boot_steps.erl"},{line,49}]},
{rabbit_boot_steps,run_step,2,
[{file,"src/rabbit_boot_steps.erl"},{line,49}]},
{rabbit_boot_steps,'-run_boot_steps/1-lc$^0/1-0-',1,
[{file,"src/rabbit_boot_steps.erl"},{line,26}]},
{rabbit_boot_steps,run_boot_steps,1,
[{file,"src/rabbit_boot_steps.erl"},{line,26}]},
{rabbit,start,2,[{file,"src/rabbit.erl"},{line,585}]},
{application_master,start_it_old,4,
[{file,"application_master.erl"},{line,273}]}]}}
If I add AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (and nothing else) to the above options, it works.
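For reference, this is the variant that boots cleanly; the two credential values below are placeholders, and everything else is unchanged from the command above:

docker run -d --name rabbitmq --net=host \
-p 4369:4369 -p 5672:5672 \
-p 15672:15672 \
-p 25672:25672 \
-e AUTOCLUSTER_TYPE=aws \
-e AUTOCLUSTER_LOG_LEVEL=debug \
-e AUTOCLUSTER_CLEANUP=true \
-e AWS_AUTOSCALING=true \
-e AUTOCLUSTER_DELAY=10 \
-e CLEANUP_WARN_ONLY=false \
-e AWS_DEFAULT_REGION=us-east-2 \
-e AWS_ACCESS_KEY_ID=<access-key-id> \
-e AWS_SECRET_ACCESS_KEY=<secret-access-key> \
aweber/rabbitmq-autocluster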
I checked with awscli, and I am able to describe instances and auto scaling groups.
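For the record, these are the awscli equivalents of the two actions in the policy, run with the same region as above:

aws autoscaling describe-auto-scaling-instances --region us-east-2
aws ec2 describe-instances --region us-east-2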
Any ideas?
Also, just to make sure the policy is not the issue, I created a user, attached the same policy to it, and the cluster came up like a charm.
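Roughly like this (the user and policy names are just what I picked for the test; policy.json is the document above):

aws iam create-user --user-name autocluster-test
aws iam put-user-policy --user-name autocluster-test \
--policy-name autocluster --policy-document file://policy.json
aws iam create-access-key --user-name autocluster-test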
Which Alpine/Erlang/RabbitMQ version are you using? I rebuilt the container with Alpine 3.5, Erlang 19.1, and RabbitMQ 3.6.6. At least now the IAM role is used to retrieve information about the autoscaling group.
Alpine 3.5 (installs erlang-19.1.0-r0) + RabbitMQ 3.6.6 worked like a charm.