Ansible ec2 module "region must be specified" issue

Some months ago I made an Ansible based autoinstall for Hortonworks' HDP 2.2.
Since HDP 2.2.4.2 is out, I wanted to update my install process and test how it works. However, I had to realize that my previously working Ansible playbooks were failing with an error message.

TASK: [Launching Ambari instance] *********************************************
failed: [localhost] => {"failed": true}
msg: region must be specified

FATAL: all hosts have already failed -- aborting
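
The failing task is nothing exotic, just a plain ec2 module call that relies on ec2_url to point at the Eucalyptus endpoint and does not pass any region. A minimal sketch of such a task (image, key name and URL are placeholders, not my real values):

- name: Launching Ambari instance
  local_action:
    module: ec2
    image: emi-12345678
    instance_type: m1.large
    key_name: ambari-key
    group: default
    wait: yes
    ec2_url: https://eucalyptus.example.com:8773/services/Eucalyptus
  register: ambari_instance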

First I checked my Ansible, Eucalyptus and boto configuration, but everything was fine. So I looked at the source of Ansible's ec2 module and found the error message in the code.
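
To show what "fine" means here: for Eucalyptus only the endpoint URL and the credentials are configured, there is no region anywhere, and that had always been enough. The ec2 module picks these up from the environment (or from the boto config), roughly like this, with placeholder values:

export EC2_URL=https://eucalyptus.example.com:8773/services/Eucalyptus
export EC2_ACCESS_KEY=myaccesskey
export EC2_SECRET_KEY=mysecretkey
# note: no EC2_REGION / AWS_REGION is set anywhere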

# tail -n +1205 /usr/share/pyshared/ansible/modules/core/cloud/amazon/ec2.py|head -17

    ec2 = ec2_connect(module)

    ec2_url, aws_access_key, aws_secret_key, region = get_ec2_creds(module)

    if region:
        try:
            vpc = boto.vpc.connect_to_region(
                region,
                aws_access_key_id=aws_access_key,
                aws_secret_access_key=aws_secret_key
            )
        except boto.exception.NoAuthHandlerFound, e:
            module.fail_json(msg = str(e))
    else:
        module.fail_json(msg="region must be specified")

The code shows that if no region is specified you get this error message, which is exactly what the message itself says.

BUT WHY DO YOU NEED A REGION? IT WAS NOT NECESSARY BEFORE! SEEMS TO BE A BUG!!!

After checking the Ansible bug tracker I found a bug report describing my problem. One of the commenters also found the same code part suspicious. So I modified it a bit to avoid the VPC related check, and after the modification all my playbooks started to work fine again. Here is the diff:
1207c1207
< 
---
>     vpc=None
1219,1220c1219,1220
<     else:
<         module.fail_json(msg="region must be specified")
---
>     #else:
>     #    module.fail_json(msg="region must be specified")
1237d1236
< 

Side-by-side diff
ec2 = ec2_connect(module)                                       ec2 = ec2_connect(module)
                                                              |     vpc=None
    ec2_url, aws_access_key, aws_secret_key, region = get_ec2       ec2_url, aws_access_key, aws_secret_key, region = get_ec2

    if region:                                                      if region:
        try:                                                            try:
            vpc = boto.vpc.connect_to_region(                               vpc = boto.vpc.connect_to_region(
                region,                                                         region,
                aws_access_key_id=aws_access_key,                               aws_access_key_id=aws_access_key,
                aws_secret_access_key=aws_secret_key                            aws_secret_access_key=aws_secret_key
            )                                                               )
        except boto.exception.NoAuthHandlerFound, e:                    except boto.exception.NoAuthHandlerFound, e:
            module.fail_json(msg = str(e))                                  module.fail_json(msg = str(e))
    else:                                                     |     #else:
        module.fail_json(msg="region must be specified")      |     #    module.fail_json(msg="region must be specified")
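
If you want to apply the same workaround, save the diff above into a file (say /tmp/ec2-region.patch, the name is arbitrary) and feed it to patch; the path is the same one I quoted above, adjust it if your Ansible lives elsewhere:

# patch /usr/share/pyshared/ansible/modules/core/cloud/amazon/ec2.py < /tmp/ec2-region.patch

Keep in mind this is a local hack: the next Ansible package upgrade will overwrite ec2.py and the change has to be applied again.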
