Some months ago I made an Ansible-based autoinstall for Hortonworks' HDP 2.2.
Since HDP 2.2.4.2 is out, I wanted to update my install process and test how it works. However, I had to realize that my previously working Ansible playbooks were failing with an error message.
TASK: [Launching Ambari instance] *********************************************
failed: [localhost] => {"failed": true}
msg: region must be specified
FATAL: all hosts have already failed -- aborting
First I checked my Ansible, Eucalyptus and boto configuration, but everything was fine. So I looked into the code of Ansible's ec2 module and found the error message in the source.
# tail -n +1205 /usr/share/pyshared/ansible/modules/core/cloud/amazon/ec2.py | head -17

    ec2 = ec2_connect(module)

    ec2_url, aws_access_key, aws_secret_key, region = get_ec2_creds(module)
    if region:
        try:
            vpc = boto.vpc.connect_to_region(
                region,
                aws_access_key_id=aws_access_key,
                aws_secret_access_key=aws_secret_key
            )
        except boto.exception.NoAuthHandlerFound, e:
            module.fail_json(msg = str(e))
    else:
        module.fail_json(msg="region must be specified")
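The effect of this guard is easiest to see in isolation. Below is a small stand-alone Python sketch of the logic before and after the fix described later; the function names and stand-in return values are my own illustration, not the module's actual API:

```python
# Stand-alone sketch of the ec2.py guard, before and after the patch.
# (Illustrative stand-ins only: the real module calls
# boto.vpc.connect_to_region() and module.fail_json(), which aborts the task.)

def flawed_vpc_lookup(region):
    """Original logic: no region means the whole task fails."""
    if region:
        return "vpc-connection-for-" + region  # stands in for connect_to_region()
    raise SystemExit("region must be specified")  # stands in for fail_json()

def patched_vpc_lookup(region):
    """Patched logic: no region simply means no VPC connection."""
    vpc = None
    if region:
        vpc = "vpc-connection-for-" + region
    return vpc  # None is acceptable for clouds without AWS regions

# A Eucalyptus setup has no AWS region, so the patched version just skips VPC:
assert patched_vpc_lookup(None) is None
assert patched_vpc_lookup("us-east-1") == "vpc-connection-for-us-east-1"
```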
The code shows that if no region is specified, you get this error, which is exactly what the message itself says.
BUT WHY DO YOU NEED A REGION? IT WAS NOT NECESSARY BEFORE! SEEMS TO BE A BUG!!!
After checking Ansible's bug tracker, I found a bug report describing my problem. One of the commenters also found the same code part suspicious. So I modified it a bit to avoid the VPC-related issue, and after the modification all my playbooks started to work fine again.
1207c1207
<
---
>     vpc=None
1219,1220c1219,1220
<     else:
<         module.fail_json(msg="region must be specified")
---
>     #else:
>     #    module.fail_json(msg="region must be specified")
1237d1236
<
Side-by-side diff
    ec2 = ec2_connect(module)                                ec2 = ec2_connect(module)
                                                           | vpc=None
    ec2_url, aws_access_key, aws_secret_key, region = get_ec2    ec2_url, aws_access_key, aws_secret_key, region = get_ec2
    if region:                                               if region:
        try:                                                     try:
            vpc = boto.vpc.connect_to_region(                        vpc = boto.vpc.connect_to_region(
                region,                                                  region,
                aws_access_key_id=aws_access_key,                        aws_access_key_id=aws_access_key,
                aws_secret_access_key=aws_secret_key                     aws_secret_access_key=aws_secret_key
            )                                                        )
        except boto.exception.NoAuthHandlerFound, e:             except boto.exception.NoAuthHandlerFound, e:
            module.fail_json(msg = str(e))                           module.fail_json(msg = str(e))
    else:                                                  |     #else:
        module.fail_json(msg="region must be specified")   |     #    module.fail_json(msg="region must be specified")