Hadoop Configurations, also known as shims or the Pentaho Big Data Adaptive layer, are collections of Hadoop libraries required to communicate with a specific version of Hadoop (and related tools: Hive, HBase, Sqoop, Pig, etc.). They are designed to be easily configured.
The Pentaho Big Data Plugin uses the Hadoop configuration defined in its plugin.properties file to communicate with Hadoop. By default, the hadoop-20 configuration is used. Update the active.hadoop.configuration property to match the Hadoop configuration you wish to use:
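For example, to switch to a different configuration, set the property to the identifier (folder name) of that configuration. The cdh42 value below is illustrative only:

    # pentaho-big-data-plugin/plugin.properties
    # must match a folder name under hadoop-configurations/
    active.hadoop.configuration=cdh42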
Hadoop configurations reside in pentaho-big-data-plugin/hadoop-configurations. They all share a basic structure:
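As an illustrative sketch (the exact contents vary by distribution and version), a configuration directory looks roughly like this:

    hadoop-configurations/
      hadoop-20/                     (the configuration identifier)
        config.properties            (friendly name, extra classpath, native libraries)
        *-site.xml                   (Hadoop cluster configuration files, e.g. core-site.xml)
        lib/                         (jar files matching the target distribution)
        pentaho-hadoop-shims-*.jar   (the shim implementation)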
Sometimes it is not enough to simply copy an existing compiled configuration to communicate with a specific cluster. Occasionally all code that interfaces with the Hadoop libraries must be recompiled (relinked) against the new libraries.
New configurations can be created by identifying the configuration that most closely matches the version of Hadoop you wish to communicate with, copying it, and swapping out the jar files in the lib/ directory to match the cluster you want to communicate with. If you compare the default configurations that are included, the differences are apparent.
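A minimal sketch of that copy-and-swap workflow, assuming a Unix shell and an illustrative cdh42-custom identifier:

    cd pentaho-big-data-plugin/hadoop-configurations
    cp -r cdh42 cdh42-custom      # start from the closest existing configuration
    rm cdh42-custom/lib/*.jar     # drop the bundled client jars
    cp /path/to/cluster/client/lib/*.jar cdh42-custom/lib/   # swap in your cluster's jars
    # finally, set active.hadoop.configuration=cdh42-custom in plugin.properties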
We support various versions of the most common distributions. The best way to see the full list is to refer to the GitHub repository itself: https://github.com/pentaho/pentaho-hadoop-shims, but here are the highlights:
- Apache Hadoop 0.20 -- Plain vanilla distro enabled by default
- Cloudera -- Earliest version supported was cdh3u4. We support several dot releases under CDH4 as well as CDH5.
  - MRv1 -- Specific configuration changes can be made to the cdh5 version of this shim to submit MapReduce jobs using MRv1 instead of the default, MRv2.
- Hortonworks -- We support hdp12, hdp13, and hdp20 so far.
- MapR -- We support several dot releases of mapr2 as well as mapr30 and mapr31. We have also provided initial support for MapR on Windows.
  - There is a special page in the wiki that provides detailed configuration settings for MapR on the different major platforms.
- Intel -- We support the idh23 distribution that Intel released before dropping their distribution.
The pentaho-hadoop-shims-api project provides the API/SPI for developing a shim implementation. A Hadoop configuration is a combination of shim implementation and supporting metadata and libraries. The following SPIs exist for interfacing with Hadoop-related libraries:
- org.pentaho.hadoop.shim.spi.HadoopShim: Hadoop-related functions including HDFS, Hadoop Configuration, and Hive JDBC driver
- org.pentaho.hadoop.shim.spi.SqoopShim: Ability to execute Sqoop tools
- org.pentaho.hadoop.shim.spi.PigShim: Simple interface for executing Pig scripts
Default implementations are provided for all shims as well as for the supporting objects.
SPIs are registered via Java's ServiceLoader mechanism (META-INF/services/<interface-name> files whose contents are the concrete implementation class names).
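For example, a shim that provides a HadoopShim implementation registers it in a file named META-INF/services/org.pentaho.hadoop.shim.spi.HadoopShim whose contents are the fully qualified implementation class (the class name below is hypothetical):

    # contents of META-INF/services/org.pentaho.hadoop.shim.spi.HadoopShim
    org.pentaho.hadoop.shim.cdh42.ShimImpl   # hypothetical implementation class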
Hadoop configurations are loaded with a special class loader that delegates loading of resources to the configuration's directory (and configured classpath) before walking up the class loader hierarchy. The class loading scheme closely resembles that of an application server.
The config.properties file defines a friendly name for the configuration as well as any additional classpath entries and native libraries the configuration requires. See the file's in-line comments for more details.
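A minimal sketch of such a file (the property names are illustrative; consult the in-line comments of a shipped configuration's config.properties for the authoritative set):

    # hadoop-configurations/<configuration>/config.properties (illustrative)
    name=My Custom Hadoop Distribution
    # additional classpath entries required by this configuration
    classpath=
    # additional native libraries required by this configuration
    library.path=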
A shim project relies upon a set of common source, test, resource, and build scripts to reduce the amount of code duplication. Shims are built with Subfloor.
The common source and tests are implementations that are shared across all 0.20-based Hadoop configurations; for now this covers all of our configurations (including CDH4 as well as any 1.x configurations). The common build script (common-shims-build.xml) overrides Subfloor build targets to include the common source files where necessary. The build.xml in the root of the shims directory provides a single place from which to execute all shim module build scripts (an attempt at a multi-module "project" script).
The shim projects are Ant-based projects that rely on Subfloor. To build the project:
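Assuming a standard Subfloor setup, the build is typically run with Ant from the shim project directory:

    ant resolve   # preload Apache Ivy and download all jar dependencies
    ant dist      # compile, jar, and package the configuration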
The resolve target will preload Apache Ivy and download all jar dependencies required for the project. The dist target will compile, jar, and package the configuration.
This package is what is used during the Pentaho Big Data Plugin project's assembly phase, where it is extracted into pentaho-big-data-plugin/hadoop-configurations/.
To use your new shim, extract the packaged tar.gz or zip archive from the dist directory of your shim project into the hadoop-configurations folder within the Big Data Plugin, and update the active.hadoop.configuration property in plugin.properties to match the folder name (the identifier) of your new shim.
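A sketch of that deployment step, with illustrative archive and folder names:

    # from your shim project
    tar -xzf dist/my-shim-package.tar.gz \
        -C /path/to/pentaho-big-data-plugin/hadoop-configurations/
    # then, in pentaho-big-data-plugin/plugin.properties:
    #   active.hadoop.configuration=my-shim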
The pentaho-hadoop-shims repository on GitHub contains the core API and SPI classes for Hadoop interaction, a "common" set of implementations from which most shims extend, and a directory for each supported distribution version.
Each shim folder contains the distribution-specific libraries, configuration settings, and SPI implementations. They all use Subfloor to compile and package a shim for deployment.
A packaged shim has the following structure:
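Because the archive is extracted directly into hadoop-configurations/, its layout mirrors the configuration structure described earlier; an illustrative sketch:

    <configuration-identifier>/
      config.properties
      *-site.xml                   (cluster configuration files, if any)
      lib/                         (distribution-specific jars)
      pentaho-hadoop-shims-*.jar   (the shim implementation)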