Ansible: Using multiple tags and untagged tag together

I have lots of Ansible playbooks with many roles in each. However, when you are installing different minor versions of the same software stack, there are only minor differences between the steps. In this case it does not make much sense to copy-paste the whole role, so I wanted to use tags instead: untagged tasks for the common steps and tagged tasks for the version-specific ones. To make it clear, here is an example. Say you have a long OS-related role which does ssh config, web config, database install and creation and much more, but sometimes you need java-6 and sometimes java-7; it is easy to add a task for each version and tag it accordingly. My theory was that I could then run
ansible-playbook --tags=untagged,java6
to install the stack with java6 and
ansible-playbook --tags=untagged,java7
to install the same stack with java7. However, this does not work.
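
For reference, here is the kind of role I have in mind (a hypothetical, simplified snippet; the task names and packages are only for illustration):

# roles/stack/tasks/main.yml (hypothetical example)
- name: Configure sshd                  # untagged, a common task
  template: src=sshd_config.j2 dest=/etc/ssh/sshd_config

- name: Install Java 6                  # version-specific task
  yum: name=java-1.6.0-openjdk state=present
  tags:
    - java6

- name: Install Java 7                  # version-specific task
  yum: name=java-1.7.0-openjdk state=present
  tags:
    - java7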

I have checked the Ansible source code and found why it is not working. Since I was not sure whether this was a bug or by design, I opened an issue and described the problem.

Brian Coca was kind enough to answer quickly, and as you can read there, this is by design; he was also kind enough to treat it as a feature request, which I hope will be accepted. However, if you need this modified behaviour now, you can either check the issue or read the solution below.

Ansible 1.9


The corresponding code is the tasks_to_run_in_play() function in playbook/__init__.py.

Original code:
elif 'untagged' in self.only_tags:
    if task_set == u:
        should_run = True

Proposed fix:
elif 'untagged' in self.only_tags and task_set == u:
    should_run = True
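
To see why the change matters: in the original code, once 'untagged' appears in only_tags, the elif branch captures every task, and a tagged task that fails the task_set == u check never reaches the later branch that matches tags against only_tags. Moving the task_set == u test into the elif condition lets tagged tasks fall through to that check. Below is a minimal standalone sketch that models the decision chain; the branch order and the intersection check are my assumptions based on the structure of the 1.9 code, not a verbatim copy:

u = set(['untagged'])  # marker Ansible assigns to tasks that carry no tags

def should_run_original(task_set, only_tags):
    if 'all' in only_tags:
        return True
    elif 'untagged' in only_tags:
        # tagged tasks land here too and get rejected, so
        # --tags=untagged,java6 never runs the java6 tasks
        if task_set == u:
            return True
    elif task_set.intersection(only_tags):
        return True
    return False

def should_run_fixed(task_set, only_tags):
    if 'all' in only_tags:
        return True
    elif 'untagged' in only_tags and task_set == u:
        return True
    elif task_set.intersection(only_tags):
        # tagged tasks now fall through to the normal tag check
        return True
    return False

only_tags = set(['untagged', 'java6'])
for task_set in (set(['untagged']), set(['java6']), set(['java7'])):
    print(sorted(task_set),
          'original:', should_run_original(task_set, only_tags),
          'fixed:', should_run_fixed(task_set, only_tags))

Running it shows that with --tags=untagged,java6 the original logic skips the java6 tasks, while the fixed logic runs both the untagged and the java6 tasks, which is exactly the behaviour I wanted.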
