
Hortonworks HDP 2: moving master components to other nodes

Needing to decommission a node from a Hadoop cluster based on HDP 2.2.4.2, I realised that Ambari 2.0, as delivered with HDP 2.2.4.2, cannot move the History Server and Falcon Server master components to another node; the functionality is simply missing. I could use the Ambari web UI for every other service I wanted to move, but not for these two.

Looking around, I found this mail, which I am summarising in the command set below.

  • Stop Falcon Server and History Server via the Ambari UI.
  • Execute the commands below, and do not forget to fill in values for the first five lines :)
    AMBARI_SERVER_HOST=
    CLUSTERNAME=mycluster
    HOSTNAME=
    TARGET_HOSTNAME=
    PASS=

    curl -i -u admin:${PASS} -H 'X-Requested-By: ambari' -X DELETE http://${AMBARI_SERVER_HOST}:8080/api/v1/clusters/${CLUSTERNAME}/hosts/${HOSTNAME}/host_components/HISTORYSERVER

    curl -i -u admin:${PASS} -H 'X-Requested-By: ambari' -X POST -d'{"HostRoles":{"component_name":"HISTORYSERVER"}}' http://${AMBARI_SERVER_HOST}:8080/api/v1/clusters/${CLUSTERNAME}/hosts/${TARGET_HOSTNAME}/host_components

    curl -i -u admin:${PASS} -H 'X-Requested-By: ambari' -X DELETE http://${AMBARI_SERVER_HOST}:8080/api/v1/clusters/${CLUSTERNAME}/hosts/${HOSTNAME}/host_components/FALCON_SERVER

    curl -i -u admin:${PASS} -H 'X-Requested-By: ambari' -X POST -d'{"HostRoles":{"component_name":"FALCON_SERVER"}}' http://${AMBARI_SERVER_HOST}:8080/api/v1/clusters/${CLUSTERNAME}/hosts/${TARGET_HOSTNAME}/host_components
  • After the commands have been executed, go back to the Ambari UI and click Re-Install for the two components on their new host.
  • As noted in the e-mail, please update the values of mapreduce.jobhistory.address and mapreduce.jobhistory.webapp.address under MapReduce2 via the Ambari UI so that they point to the new host.
  • Please also update *.broker.url under Falcon -> Config -> Falcon startup.properties.
  • When the installation has finished, start the services (or use the REST calls sketched below).
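
For reference, the Re-Install and Start steps can also be driven from the command line instead of the UI. Below is a minimal, untested sketch that assumes the standard Ambari REST state transitions (PUT the host component to INSTALLED, then to STARTED once the configuration changes above are in place) and reuses the variables defined earlier.

    # Sanity check: confirm the components are now registered on the new host
    curl -i -u admin:${PASS} -H 'X-Requested-By: ambari' -X GET http://${AMBARI_SERVER_HOST}:8080/api/v1/clusters/${CLUSTERNAME}/hosts/${TARGET_HOSTNAME}/host_components/HISTORYSERVER
    curl -i -u admin:${PASS} -H 'X-Requested-By: ambari' -X GET http://${AMBARI_SERVER_HOST}:8080/api/v1/clusters/${CLUSTERNAME}/hosts/${TARGET_HOSTNAME}/host_components/FALCON_SERVER

    # Install the components on the new host (the API equivalent of Re-Install in the UI)
    curl -i -u admin:${PASS} -H 'X-Requested-By: ambari' -X PUT -d '{"HostRoles":{"state":"INSTALLED"}}' http://${AMBARI_SERVER_HOST}:8080/api/v1/clusters/${CLUSTERNAME}/hosts/${TARGET_HOSTNAME}/host_components/HISTORYSERVER
    curl -i -u admin:${PASS} -H 'X-Requested-By: ambari' -X PUT -d '{"HostRoles":{"state":"INSTALLED"}}' http://${AMBARI_SERVER_HOST}:8080/api/v1/clusters/${CLUSTERNAME}/hosts/${TARGET_HOSTNAME}/host_components/FALCON_SERVER

    # Start them once the installs have completed and the configuration has been updated
    curl -i -u admin:${PASS} -H 'X-Requested-By: ambari' -X PUT -d '{"HostRoles":{"state":"STARTED"}}' http://${AMBARI_SERVER_HOST}:8080/api/v1/clusters/${CLUSTERNAME}/hosts/${TARGET_HOSTNAME}/host_components/HISTORYSERVER
    curl -i -u admin:${PASS} -H 'X-Requested-By: ambari' -X PUT -d '{"HostRoles":{"state":"STARTED"}}' http://${AMBARI_SERVER_HOST}:8080/api/v1/clusters/${CLUSTERNAME}/hosts/${TARGET_HOSTNAME}/host_components/FALCON_SERVER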

As a result of my fault, and also result of strange behaviour of Ambari UI, I have overwritten my default capacity scheduler configuration data on my Hadoop Hortonworks HDP 2.0 cluster. Looking around I have found the xml file containing the original value as /var/lib/ambari-agent/cache/stacks/HDP/2.0._/services/YARN/configuration/capacity-scheduler.xml However on the UI you need a properties file style format. Here it is. yarn.scheduler.capacity.maximum-applications=10000 yarn.scheduler.capacity.maximum-am-resource-percent=0.2 yarn.scheduler.capacity.root.queues=default yarn.scheduler.capacity.root.capacity=100 yarn.scheduler.capacity.root.default.capacity=100 yarn.scheduler.capacity.root.default.user-limit-factor=1 yarn.scheduler.capacity.root.default.maximum-capacity=100 yarn.scheduler.capacity.root.default.state=RUNNING yarn.scheduler.capacity.root.default.acl_submit_jobs=* yarn.scheduler.capacity.root.default.acl_administer_jobs=* yarn.scheduler.capacity.root.acl_...