Updated rhs-hadoop and rhs-hadoop-install packages that add many enhancements and fix multiple bugs are now available for Red Hat Storage 3.

Red Hat Storage is a software-only, scale-out storage solution that provides flexible and affordable unstructured data storage. Red Hat Storage 3.0 unifies data storage and infrastructure, increases performance, and improves availability and manageability to meet enterprise-level storage challenges. Red Hat Storage provides compatibility for Apache Hadoop and uses the standard file system APIs available in Hadoop to provide a new storage option for Hadoop deployments.

* The rhs-hadoop component is the Hadoop file system plugin for Red Hat Storage 3.0. It has undergone a number of recent changes to add support for the use of more than one Red Hat Storage volume with Hadoop.

* The rhs-hadoop-install component is the installer for rhs-hadoop for Red Hat Storage 3.0. It has undergone a significant rewrite to support both new and existing Red Hat Storage clusters, as well as the use of multiple volumes.

This advisory adds many enhancements and fixes multiple bugs:

* Previously, executing the setup_cluster.sh script copied all files to the /tmp/bin directory instead of using the existing scripts installed on the node by the rhs-hadoop-install package. With this release, the files are no longer copied, and you must run the "yum install rhs-hadoop-install" command on each node. (BZ#1131171)

* Previously, the rhs-high-throughput profile was not enabled by the setup_cluster.sh script. With this release, the --profile option allows the setup_cluster.sh script to enable the rhs-high-throughput profile. (BZ#1139458)

* The TEZ service is now supported from Hortonworks Data Platform 2.1 in Red Hat Storage 3.0 for the Hadoop stack. (BZ#1139710)

* The HBase service is now supported on Hortonworks Data Platform 2.1 in Red Hat Storage 3.0 for Hadoop workloads. (BZ#1139717)

* Previously, the default volume for Hadoop was not defined in enable_vol.sh. With this release, specifying "--make-default" in enable_vol.sh makes the specified volume the default volume for Hadoop. (BZ#1141348)

* Previously, executing the setup_cluster.sh script reinstalled Ambari even if Ambari was already running. With this release, executing the setup_cluster.sh script does not reinstall Ambari. (BZ#1151211)

* A new option has been introduced that enables users to specify an Ambari repository URL that can be different from the repository URL used by setup_cluster.sh. With this fix, the --ambari-repo <url> option allows users to install and use new Ambari repository files. (BZ#1151215)

* A new auxiliary script has been added to the bin directory that supports all actions needed to modify Hadoop configuration files. With this script, users can now add or delete core-site keys and modify core-site property values. (BZ#1151219)

All users of Red Hat Storage are advised to install these updated packages.
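The new options above can be combined roughly as follows. This is a minimal sketch only: the flag names (--profile, --ambari-repo, --make-default) are taken from this advisory, but the positional arguments (node names, volume name, repository URL) are hypothetical placeholders, and the exact argument syntax may differ; consult each script's built-in help before use.

```shell
# Install the installer package locally on every node; files are no
# longer copied to /tmp/bin by setup_cluster.sh (BZ#1131171):
yum install rhs-hadoop-install

# Set up the cluster with the rhs-high-throughput profile enabled
# (BZ#1139458) and a separate Ambari repository (BZ#1151215).
# "http://repo.example.com/ambari" and the node list are placeholders:
setup_cluster.sh --profile \
    --ambari-repo http://repo.example.com/ambari \
    node1:/mnt/brick1 node2:/mnt/brick1

# Make a volume the default volume for Hadoop (BZ#1141348);
# "HadoopVol" is a hypothetical volume name:
enable_vol.sh --make-default HadoopVol
```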
Before applying this update, make sure all previously released errata relevant to your system have been applied.

This update is available via the Red Hat Network. Details on how to use the Red Hat Network to apply this update are available at https://access.redhat.com/articles/11258
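On a registered system, applying the update typically reduces to a yum transaction for the two packages named in this advisory; this is an illustrative sketch, not a substitute for the Red Hat Network instructions linked above.

```shell
# Check which versions are currently installed:
rpm -q rhs-hadoop rhs-hadoop-install

# Pull the updated packages from the subscribed channels:
yum update rhs-hadoop rhs-hadoop-install
```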
1064248 - missing apps/hive/warehouse
1065417 - [BREW] rhs-hadoop-2.1.6-2 rpmlint check - some minor errors
1080241 - rhs-hadoop-install deletes files in the gluster volume and the volume itself.
1082086 - UIDs across cluster not verified on mgmt-node when it's outside the storage pool
1082695 - install.sh should never delete files in the gluster volume
1082718 - install.sh does not handle the case of "hadoop" group users in parsing "getent group hadoop"
1082798 - add hive and hcat users; add app/ and app/webhcat/ directories
1084239 - NullPointerException if mapreduce.jobtracker.system.dir undefined.
1101937 - wrong error message for mount points configuration
1102467 - listStatus corrupts: file paths with % and other URI escape encodings are corrupted
1102800 - Gluster file names with special characters not seen by hadoop fs -ls
1109910 - required users created by setup_cluster.sh do not have same uid across cluster
1113023 - create_vol.sh should do all validation (wrong argument count etc.) before running volume create command
1113609 - create_vol.sh does volume mount multiple times if it has more than one brick.
1114585 - rhs-node listed as rhs_node in enable_vol.sh help text
1118623 - [doc] Need Configuration/Installation steps/script in case of removing a brick (and also removing a node from the cluster) from a hadoop-enabled volume and cluster.
1122905 - some problems in rpmlint check output
1122909 - [BREW] incorrect package content description
1139458 - add to the installer rhs-high-throughput profile, on each storage node
1139717 - [RFE] Support HBase from HDP 2.1 in RHSS 3.0+
1141348 - [RFE] add ability to define the default volume in enable_vol.sh
1151211 - [RFE] setup_cluster should not, by default, yum install ambari (agent/server) when already installed
1151219 - [RFE] complete the actions supported by bin/ambari_config_update.sh (add, delete, replace, remove)

These packages are GPG signed by Red Hat for security. Our key and
details on how to verify the signature are available from:
