Hitachi Vantara Pentaho Community Wiki

Welcome to the Big Data space in the Pentaho Community wiki. This space is the community home and collection point for all things Big Data within the Pentaho ecosystem. It is the place to find documentation, how-to's, best practices, use-cases and other information about employing Pentaho technology as part of your overall Big Data Strategy. It is also where you can share your own information and experiences. We look forward to your participation and contribution!

Expectations - If you are unfamiliar with open source, this article is a good place to start. The open source community thrives on participation and cooperation. There are several communication channels available where people can help you, but they are not obligated to do so. You are responsible for your own success, which will require time, effort and a small amount of technical ability. If you prefer to have a relationship with a known vendor who will answer questions over the phone, help you during your evaluation and support you in production, please visit www.pentaho.com.

  • [Downloads] - Get the code
  • CI Builds - Last Dev Build (unstable)
  • How-To's - Get me started

Overview

Pentaho's Big Data story revolves around Pentaho Data Integration (PDI), also known as Kettle. Kettle is a powerful Extraction, Transformation and Loading (ETL) engine that uses a metadata-driven approach. The Kettle engine provides data services for, and is embedded in, many of the applications within the Pentaho BI suite. Kettle comes with a graphical, drag-and-drop design environment for designing and running Kettle Jobs and Transformations.
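
To make the "embedded" point concrete, here is a minimal sketch of running a Kettle transformation from the Java API. It assumes the PDI libraries (e.g. kettle-core and kettle-engine) are on the classpath and that /path/to/sample.ktr is a transformation saved from the design environment; the class name and file path are illustrative, not part of any Pentaho sample.

    import org.pentaho.di.core.KettleEnvironment;
    import org.pentaho.di.trans.Trans;
    import org.pentaho.di.trans.TransMeta;

    public class RunTransformation {
        public static void main(String[] args) throws Exception {
            // Initialize the Kettle environment (plugins, logging) once per JVM.
            KettleEnvironment.init();
            // Load the transformation metadata from a .ktr file (path is illustrative).
            TransMeta meta = new TransMeta("/path/to/sample.ktr");
            // Create a runnable transformation and start all of its steps.
            Trans trans = new Trans(meta);
            trans.execute(null);
            // Block until every step has finished processing its row stream.
            trans.waitUntilFinished();
            if (trans.getErrors() > 0) {
                System.err.println("Transformation finished with errors.");
            }
        }
    }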

A quick two-minute video of PDI in action

Kettle Transformations

A Kettle transformation consists of one or more steps that perform core ETL work such as reading rows from a file or database, filtering rows, calculating new columns and sending the resulting data stream somewhere else. All steps in a transformation execute simultaneously (usually in separate threads) and data is passed from step to step in parallel. The data is operated on as a continuous stream without having to be fully read into memory or staged. The image to the right demonstrates a very simple Kettle transformation: read from a data source, apply a transformation (in this case a filter), then write the data stream to another data source.
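
For readers new to this streaming model, here is a minimal sketch, in plain Java rather than Kettle's own classes, of the pattern described above: each step runs in its own thread and rows flow between steps through bounded queues, so no step ever holds the full dataset in memory. All names in it are illustrative.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class RowPipeline {
        // Sentinel value that signals the end of the row stream.
        private static final String DONE = "__END_OF_STREAM__";

        public static void main(String[] args) throws InterruptedException {
            // Bounded queues between steps: a slow step applies back-pressure upstream.
            BlockingQueue<String> readToFilter = new ArrayBlockingQueue<>(100);
            BlockingQueue<String> filterToWrite = new ArrayBlockingQueue<>(100);

            // "Input step": emits rows one at a time.
            Thread reader = new Thread(() -> {
                try {
                    for (int i = 0; i < 10; i++) readToFilter.put("row-" + i);
                    readToFilter.put(DONE);
                } catch (InterruptedException ignored) { }
            });

            // "Filter step": passes along only the rows that match a condition.
            Thread filter = new Thread(() -> {
                try {
                    String row;
                    while (!(row = readToFilter.take()).equals(DONE)) {
                        if (row.endsWith("0") || row.endsWith("5")) filterToWrite.put(row);
                    }
                    filterToWrite.put(DONE);
                } catch (InterruptedException ignored) { }
            });

            // "Output step": writes whatever rows reach it.
            Thread writer = new Thread(() -> {
                try {
                    String row;
                    while (!(row = filterToWrite.take()).equals(DONE)) {
                        System.out.println("wrote " + row);
                    }
                } catch (InterruptedException ignored) { }
            });

            // All three steps run simultaneously, just as Kettle steps do.
            reader.start(); filter.start(); writer.start();
            reader.join(); filter.join(); writer.join();
        }
    }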

This is a closed wiki space

The only people with access are Pentaho employees and Dave Reinke (Chris will need to sign up for the wiki and send me his user ID).

This is a first attempt at an open source collaboration space for Big Data. It will eventually be opened up, but it is currently a work in progress and a place to collect use cases, demos and so on. The structure and initial content are a rough first draft, waiting to be shaped by the Big Data team.
