Hitachi Vantara Pentaho Community Wiki
{excerpt} How to read data from a Hive table, transform it, and write it to a Hive table within the workflow of a PDI job.{excerpt}
h1.Prerequisites
In order to follow along with this how-to guide, you will need the following:
* MapR
* Pentaho Data Integration
* Hive

h1.Sample Files
The source data for this guide resides in a Hive table called weblogs. If you have previously completed the [Loading Data into Hive] guide, you can skip to [Create#Create a Database Connection to Hive]. If not, you will need the following data file and must perform the [Create a Hive Table] instructions before proceeding.
The sample data file needed for the [Create a Hive Table] instructions is:
|File Name|Content|
|How To's^weblogs_parse.txt|Tab-delimited, parsed weblog data|
\\
NOTE: If you have previously completed the [Using Pentaho MapReduce to Parse Weblog Data] guide, then the necessary files will already be in the proper location.
This file should be placed in the /weblogs/parse directory of MapR-FS using the following commands.
{code}
hadoop fs -mkdir /weblogs
hadoop fs -mkdir /weblogs/parse
hadoop fs -put weblogs_parse.txt /weblogs/parse/part-00000
{code}
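To confirm the file landed where the later steps expect it, you can list the target directory before continuing; this is an optional sanity check, assuming the hadoop client is on your path:
{code}
# part-00000 should appear with a non-zero size
hadoop fs -ls /weblogs/parse
{code}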
h1.Step-By-Step Instructions
h2.Setup
Start MapR if it is not already running.
Start Hive Server if it is not already running.
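If you need to start the Hive server manually, Hive releases of this era start HiveServer1 (the service PDI's 'Hadoop Hive' connection type talks to) from the command line; a sketch, assuming 'hive' is on your path (the service name and default port can differ between Hive versions):
{code}
# starts HiveServer1 listening on the default port 10000
hive --service hiveserver
{code}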
h2.Create a Hive Table
NOTE: This task may be skipped if you have completed the [Loading Data into Hive] guide.
\\
\\
# *Open the Hive Shell*: Open the Hive shell by entering 'hive' at the command line so you can manually create a Hive table.
\\
\\
# *Create the Table in Hive:* You need a Hive table to load the data into, so enter the following in the Hive shell.
{code}
create table weblogs (
    client_ip    string,
    full_request_date string,
    day    string,
    month    string,
    month_num int,
    year    string,
    hour    string,
    minute    string,
    second    string,
    timezone    string,
    http_verb    string,
    uri    string,
    http_status_code    string,
    bytes_returned        string,
    referrer        string,
    user_agent    string)
row format delimited
fields terminated by '\t';
{code}
# *Close the Hive Shell*: You are done with the Hive Shell for now, so close it by entering 'quit;' in the Hive Shell.
\\
\\
# *Load the Table:* Load the Hive table by running the following commands:
{code}
hadoop fs -put part-00000.txt /user/hive/warehouse/weblogs/
{code}
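Before moving on, you can sanity-check the load from the Hive shell. These queries are an optional verification step, not part of the guide's required instructions:
{code}
-- confirm the table schema matches the create statement
describe weblogs;
-- confirm rows were loaded
select count(*) from weblogs;
select * from weblogs limit 5;
{code}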

{anchor:Create a Database Connection to Hive}
h2.Create a Database Connection to Hive
If you already have a shared Hive Database Connection defined within PDI then this task may be skipped.

# *Start PDI on your desktop.* Once it is running choose 'File' -> 'New' -> 'Job' from the menu system or click on the 'New file' icon on the toolbar and choose the 'Job' option.
\\
\\
# *Create a New Connection*: In the View Palette right click on 'Database connections' and select 'New'.
!worddav70b90c28893c85b16663ae5061336e4d.png|height=114,width=274!
\\
\\
# *Configure the Connection:* In the Database Connections window enter the following:
## Connection Name: Enter 'Hive'
## Connection Type: Select 'Hadoop Hive'
## Host Name and Port Number: Your connection information.  For local single node clusters use 'localhost' and port '10000'.
## Database Name: Enter 'Default'
\\
When you are done your window should look like:
!worddav13ff02ebd5d8970ed6c3d11822af99b7.png|height=412,width=442!
\\
Click 'Test' to test the connection.
\\
If the test is successful click 'OK' to close the Database Connection window.

h2. Create a Job to Aggregate Web Log Data into a Hive Table
In this task you will create a job that runs a Hive script to build an aggregate table, weblogs_agg, using the detailed data found in the Hive weblogs table.  The new Hive weblogs_agg table will contain a count of page views for each IP address by month and year.
{tip:title=Speed Tip}You can download the Kettle Job [^aggregate_hive.kjb] already completed{tip}

# *Start PDI on your desktop.* Once it is running choose 'File' -> 'New' -> 'Job' from the menu system or click on the 'New file' icon on the toolbar and choose the 'Job' option.
\\
\\
# *Add a Start Job Entry:*  You need to tell PDI where to start the job, so expand the 'General' section of the Design palette and drag a 'Start' node onto the job canvas.  Your canvas should look like:
!worddav6e0f1fce79f328118e06c69ef21ded39.png|height=284,width=511!
\\
\\
# *Add a SQL Job Entry:*  You are going to run a HiveQL script to create the aggregate table, so expand the 'Scripting' section of the Design palette and drag a 'SQL' node onto the job canvas.  Your canvas should look like:
!worddav2c137fe2d6971bd7c3938773528a1bf0.png|height=276,width=352!
\\
\\
# *Connect the Start and SQL Job Entries*: Hover the mouse over the 'Start' node and a tooltip will appear. !worddav2fed9f38610463139fe67ad3a5a50e04.png|height=56,width=59! Click on the output connector (the green arrow pointing to the right) and drag a connector arrow to the 'SQL' node. Your canvas should look like this:
!worddav92f6ca9a587cca6d08b6f765d7c64361.png|height=130,width=199!
\\
\\
# *Edit the SQL Job Entry:* Double-click on the 'SQL' node to edit its properties. Enter this information:
## Connection: Select 'Hive'
## SQL Script: Enter the following
{code}
create table weblogs_agg
as
select
  client_ip
, year
, month
, month_num
, count(*) as pageviews
from weblogs
group by client_ip, year, month, month_num
{code}
When you are done your window should look like:
!worddavda0993468c9a962de8df552293080746.png|height=377,width=485!
\\
Click 'OK' to close the window.
\\
\\
# *Save the Job*: Choose 'File' -> 'Save as...' from the menu system. Save the job as 'aggregate_hive.kjb' into a folder of your choice.
\\
\\
# *Run the Job*: Choose 'Action' -> 'Run' from the menu system or click on the green run button on the job toolbar. An 'Execute a job' window will open. Click on the 'Launch' button. An 'Execution Results' panel will open at the bottom of the PDI window and show the progress of the job as it runs. After a few seconds the job should finish successfully: !worddavb2a006f6fac55a6c75c73e26856b0f70.png|height=193,width=465!
\\
If any errors occurred, the job entry that failed will be highlighted in red, and you can use the 'Logging' tab to view error messages.
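As an alternative to running the job from the PDI desktop client, Kitchen, PDI's command-line job runner, can execute the saved .kjb file. A minimal sketch, run from the PDI installation directory (the job path below is an example; adjust it to wherever you saved the job):
{code}
# run the saved job with basic logging
./kitchen.sh -file=/home/user/aggregate_hive.kjb -level=Basic
{code}
This is handy for scheduling the aggregation with cron or another scheduler once it works interactively.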

h2.Check Hive
# *Open the Hive Shell:* Open the Hive shell by entering 'hive' at the command line so you can query the new table.
# *Query Hive for Data:* Verify that the aggregate data was created by querying the weblogs_agg table.
{code}
select * from weblogs_agg limit 10;
{code}
\\
\\
# *Close the Hive Shell:* You are done with the Hive Shell for now, so close it by entering 'quit;' in the Hive Shell.

h1.Summary
In this guide you learned how to transform data within Hive as part of a PDI job flow.