
Hortonworks Data Platform 2.0 (Alpha)

Over the past few days I have been testing Hortonworks Data Platform 2.0 (Alpha). Previously I mainly used Cloudera distributions, but because of this bug in CDH 4.1.3 I wanted to test alternatives, and I chose HDP.


Note: This bug effectively makes RCFILE useless with Hive 0.9.0, because Hive does not apply column pruning at all. It now seems that the problem is in Hive 0.9.0 itself rather than in the CDH packaging.
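
To illustrate why this matters, here is a minimal sketch (the table and column names are hypothetical): RCFILE stores data column by column, so a query selecting a single column should only read that column's data. With pruning broken, Hive reads every column anyway, losing the format's main benefit.

# hypothetical RCFILE table; run on a node with the Hive client installed
hive -e "CREATE TABLE events_rc (id BIGINT, payload STRING) STORED AS RCFILE;"
hive -e "SELECT id FROM events_rc;"  # with the bug, this scans 'payload' too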

Unfortunately there is a bug in HDP 2.0 as well, although it is not as serious. When Ambari is used for automated installation, it can fail with an "Oozie test Fails" message, or, if Oozie is not selected, with a "Hive/HCatalog test Fails" message, and the deployment log shows the following error:

 "\"Sun Mar 03 21:38:03 +0100 2013 /Stage[2]/Hdp2-hive::Hive::Service_check/Exec[/tmp/hiveSmoke.sh]/returns (notice): FAILED: Hive Internal Error: org.apache.hadoop.hive.ql.metadata.HiveException(MetaException(message:Could not connect to meta store using any of the URIs provided))\"",

I searched for that message and found this thread, which mentions that a similar error can be caused by specifying the MySQL host instead of leaving it blank.

I ran many installations to test this, and it is true: if you specify the MySQL host, even if you specify it correctly, the installation always fails. The workaround is easy, though. Just leave the MySQL host field empty.
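
If you want to check the metastore by hand before rerunning the smoke test, a rough equivalent (assuming you run it on a node with the Hive client installed) is:

# any trivial metastore-backed query will do; if the metastore URI is
# broken, this fails with the same "Could not connect to meta store" error
hive -e "show databases;"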

Note: I really like the Hortonworks approach to installation, configuration file handling, and operation compared to the Cloudera one, but I also miss some features, such as decommissioning nodes and changing the roles (datanode, tasktracker) of nodes.
