
Hortonworks HDP 2: moving master components to other nodes

While decommissioning a node from a Hadoop cluster based on HDP 2.2.4.2, I realised that Ambari 2.0, delivered with HDP 2.2.4.2, cannot move the History Server and Falcon Server master components to another node; the functionality is simply missing. I could use the Ambari web UI to move every other service I wanted, but not these two.

So looking around I found this mail,

which I summarise in the command set below.

  • Stop Falcon Server and History Server via the Ambari UI.
  • Execute the commands below, and do not forget to specify values for the first five lines :) (a verification sketch follows this list)
    AMBARI_SERVER_HOST=
    CLUSTERNAME=mycluster
    HOSTNAME=
    TARGET_HOSTNAME=
    PASS=

    curl -i -u admin:${PASS} -H 'X-Requested-By: ambari' -X DELETE http://${AMBARI_SERVER_HOST}:8080/api/v1/clusters/${CLUSTERNAME}/hosts/${HOSTNAME}/host_components/HISTORYSERVER

    curl -i -u admin:${PASS} -H 'X-Requested-By: ambari' -X POST -d'{"HostRoles":{"component_name":"HISTORYSERVER"}}' http://${AMBARI_SERVER_HOST}:8080/api/v1/clusters/${CLUSTERNAME}/hosts/${TARGET_HOSTNAME}/host_components

    curl -i -u admin:${PASS} -H 'X-Requested-By: ambari' -X DELETE http://${AMBARI_SERVER_HOST}:8080/api/v1/clusters/${CLUSTERNAME}/hosts/${HOSTNAME}/host_components/FALCON_SERVER

    curl -i -u admin:${PASS} -H 'X-Requested-By: ambari' -X POST -d'{"HostRoles":{"component_name":"FALCON_SERVER"}}' http://${AMBARI_SERVER_HOST}:8080/api/v1/clusters/${CLUSTERNAME}/hosts/${TARGET_HOSTNAME}/host_components
  • After the commands have been executed, go to the Ambari UI and click Re-Install for the services on the new host (an API-based alternative is sketched below the list).
  • As noted in the e-mail, please update the values of mapreduce.jobhistory.address and mapreduce.jobhistory.webapp.address of MapReduce2 via the Ambari UI (example values below).
  • Please also update *.broker.url under Falcon -> Config -> Falcon startup.properties (example below).
  • When the installation has finished, start the services.
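
To verify that a DELETE/POST pair actually moved a component, you can query the host_components resource on the target host. This is a minimal sketch reusing the variables set above; an HTTP 200 response with a HostRoles body means the component is now registered on the new host:

    # expect HTTP 200 and a HostRoles section naming HISTORYSERVER on the target host
    curl -i -u admin:${PASS} -H 'X-Requested-By: ambari' -X GET http://${AMBARI_SERVER_HOST}:8080/api/v1/clusters/${CLUSTERNAME}/hosts/${TARGET_HOSTNAME}/host_components/HISTORYSERVER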
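
When updating the MapReduce2 properties, point both addresses at the new host. The sketch below assumes the stock Hadoop JobHistory Server ports (10020 for the RPC address, 19888 for the web UI); adjust them if your cluster overrides the defaults:

    mapreduce.jobhistory.address=<TARGET_HOSTNAME>:10020
    mapreduce.jobhistory.webapp.address=<TARGET_HOSTNAME>:19888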
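
Similarly, *.broker.url in Falcon's startup.properties must point at a messaging broker reachable from the new Falcon Server host. Assuming the default embedded ActiveMQ broker on its standard port 61616, the entry would look like:

    *.broker.url=tcp://<TARGET_HOSTNAME>:61616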
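
If you prefer to stay on the command line instead of clicking Re-Install, the re-created components can also be installed and started through the same REST API. This is a sketch of the standard Ambari state transitions, not part of the original mail; repeat it with FALCON_SERVER for the second component:

    # install the freshly added component on the target host
    curl -i -u admin:${PASS} -H 'X-Requested-By: ambari' -X PUT -d '{"HostRoles":{"state":"INSTALLED"}}' http://${AMBARI_SERVER_HOST}:8080/api/v1/clusters/${CLUSTERNAME}/hosts/${TARGET_HOSTNAME}/host_components/HISTORYSERVER

    # once the configuration updates above are saved, start it
    curl -i -u admin:${PASS} -H 'X-Requested-By: ambari' -X PUT -d '{"HostRoles":{"state":"STARTED"}}' http://${AMBARI_SERVER_HOST}:8080/api/v1/clusters/${CLUSTERNAME}/hosts/${TARGET_HOSTNAME}/host_components/HISTORYSERVER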
