Welcome to the Big Data space in the Pentaho Community wiki. This space is the community home and collection point for all things Big Data within the Pentaho ecosystem. It is the place to find documentation, how-to's, best practices, use cases and other information about employing Pentaho technology as part of your overall Big Data strategy. It is also where you can share your own information and experiences. We look forward to your participation and contribution!

Overview

Pentaho's Big Data story revolves around Pentaho Data Integration, also known as Kettle. Kettle is a powerful Extraction, Transformation and Loading (ETL) engine that uses a metadata-driven approach. The Kettle engine provides data services for, and is embedded in, most of the applications within the Pentaho BI suite, from Spoon, the Kettle designer, to the Pentaho Report Designer. Check out About Kettle and Big Data for more details on the Pentaho Big Data story.

News and Information

Getting Started

It's easy to get started with Pentaho for Big Data.

  1. Watch the intro videos below.
  2. Read about Kettle and Big Data.
  3. Download and configure the software here.
  4. Try the How To's for yourself.
  5. Join the Pentaho Big Data forum to let us know how you are using Big Data, ask questions, and give feedback.

Intro Videos

A quick introduction to executing Kettle transforms as a Mapper and Reducer within the cluster.

(Video: KZe1UugxXcs)
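To make the Mapper and Reducer roles concrete, here is a minimal word-count sketch in plain Python. In a real Pentaho job these two functions would be Kettle transformations executed inside the cluster, and Hadoop's shuffle/sort phase would group the mapper output by key between the two stages; the function names and sample data below are illustrative, not part of any Pentaho API.

```python
# Illustrative sketch of the MapReduce contract that Kettle transformations
# fill when executed as a Mapper and Reducer. Not Pentaho code.

def mapper(lines):
    """Map phase: emit a (word, 1) key/value pair for every word."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reducer(pairs):
    """Reduce phase: fold the values for each key into a total.
    Hadoop's shuffle/sort guarantees the pairs arrive grouped by key;
    summing into a dict gives the same result here."""
    counts = {}
    for word, n in pairs:
        counts[word] = counts.get(word, 0) + n
    return counts

if __name__ == "__main__":
    sample = ["hello big data", "hello pentaho"]
    print(reducer(mapper(sample)))
    # {'hello': 2, 'big': 1, 'data': 1, 'pentaho': 1}
```

The key point the video demonstrates is that you design these two stages graphically in Spoon instead of writing them by hand.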

A quick example of loading into the Hadoop Distributed File System (HDFS) using Pentaho Kettle.

(Video: Ylekzmd6TAc)

A quick example of extracting data from the Hadoop Distributed File System (HDFS) using Pentaho Kettle.

(Video: 3Xew58LcMbg)