How to use HBase TableInputFormat in Pentaho MapReduce.
This guide explains how to configure Pentaho MapReduce to use the TableInputFormat for reading data from HBase and how to configure a map-reduce transformation to process that data using the HBaseRowDecoder step.
To follow along with this how-to guide you will need the following:
- Hadoop configured to access HBase
- Pentaho Data Integration
The HBaseRowDecoder step is designed specifically for use in map-reduce transformations in order to decode the key and value data that is output by the TableInputFormat. The key output is the row key from HBase and the value is an HBase "Result" object containing all the column values for the row in question.
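Conceptually, the decoding performed by the step turns raw HBase bytes into typed fields according to the mapping you define. The following plain-Java sketch illustrates that idea using only the standard library; the row data, field names, and `decodeRow` helper are hypothetical, and a real mapper would receive the key as an `ImmutableBytesWritable` and the cells inside an HBase `Result` object rather than the simple map used here.

```java
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class RowDecodeSketch {

    // Decode one cell's raw bytes as a UTF-8 string -- the simplest case of the
    // type-aware decoding that the HBaseRowDecoder step performs via its mapping.
    static String decodeString(byte[] raw) {
        return new String(raw, StandardCharsets.UTF_8);
    }

    // Decode a whole row: the byte[] row key plus cells keyed by "family:qualifier".
    static Map<String, String> decodeRow(byte[] rowKey, Map<String, byte[]> cells) {
        Map<String, String> decoded = new LinkedHashMap<>();
        decoded.put("_key", decodeString(rowKey));
        for (Map.Entry<String, byte[]> cell : cells.entrySet()) {
            decoded.put(cell.getKey(), decodeString(cell.getValue()));
        }
        return decoded;
    }

    public static void main(String[] args) {
        // Hypothetical row data standing in for what TableInputFormat emits.
        byte[] rowKey = "customer-0042".getBytes(StandardCharsets.UTF_8);
        Map<String, byte[]> cells = new LinkedHashMap<>();
        cells.put("info:name", "Alice".getBytes(StandardCharsets.UTF_8));
        cells.put("info:city", "Berlin".getBytes(StandardCharsets.UTF_8));

        decodeRow(rowKey, cells).forEach((k, v) -> System.out.println(k + " = " + v));
    }
}
```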
First, configure a Pentaho MapReduce input step, specifying type "Serializable" for both the incoming key and value fields.
Next specify the incoming row key and HBase result fields in the HBaseRowDecoder step.
Finally, define or load a mapping using the Mapping editor tab.
Once defined (or loaded), this mapping is encapsulated in the transformation metadata.
To ensure that input splits are created using the TableInputFormat, configure the Input Format and Input Path fields of the Job Setup tab as shown in the following screenshot.
The following table lists the properties that can be supplied in the User Defined tab to configure the scan performed by the TableInputFormat. Entries shown in bold are mandatory.
| Property | Description |
|---|---|
| **hbase.mapred.inputtable** | Name of the HBase table to read from |
| **hbase.mapred.tablecolumns** | Space-delimited list of columns in ColFam:ColName format (ColName can be omitted to read all columns from a family) |
| hbase.mapreduce.scan.cachedrows | Number of rows for caching that will be passed to scanners |
| hbase.mapreduce.scan.timestamp | Time stamp used to filter columns with a specific time stamp |
| hbase.mapreduce.scan.timerange.start | Starting time stamp to filter in a given time stamp range |
| hbase.mapreduce.scan.timerange.end | End time stamp to filter in a given time stamp range |
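Putting these together, the User Defined tab for a scan over a single table might contain entries like the following. The table name, column list, and time stamp values here are purely illustrative; note the trailing colon on `status:`, which reads every column in that family.

```
hbase.mapred.inputtable              = WebLogs
hbase.mapred.tablecolumns            = info:ip info:referrer status:
hbase.mapreduce.scan.cachedrows      = 500
hbase.mapreduce.scan.timerange.start = 1357000000000
hbase.mapreduce.scan.timerange.end   = 1357999999999
```

Only the first two entries are required; the caching and time-range properties simply tune which rows the scanners return.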