Hitachi Vantara Pentaho Community Wiki

Welcome to the Big Data space in the Pentaho Community wiki. This space is the community home and collection point for all things Big Data within the Pentaho ecosystem. It is the place to find documentation, how-to's, best practices, use cases and other information about employing Pentaho technology as part of your overall Big Data strategy. It is also where you can share your own information and experiences. We look forward to your participation and contribution!

Overview

Pentaho's Big Data story revolves around Pentaho Data Integration, also known as Kettle. Kettle is a powerful Extract, Transform and Load (ETL) engine that uses a metadata-driven approach. The Kettle engine provides data services for, and is embedded in, most of the applications within the Pentaho BI suite, from Spoon, the Kettle designer, to Pentaho Report Designer. Check out About Kettle and Big Data for more details of the Pentaho Big Data story.
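Because the Kettle engine is a Java library, it can also be embedded directly in your own applications, just as the Pentaho suite embeds it. A minimal sketch of running a transformation from Java follows; the file name my_transform.ktr is a placeholder, and error handling is reduced to a single check:

    import org.pentaho.di.core.KettleEnvironment;
    import org.pentaho.di.trans.Trans;
    import org.pentaho.di.trans.TransMeta;

    public class RunTransformation {
        public static void main(String[] args) throws Exception {
            // Initialize the Kettle environment (plugin registry, logging, etc.)
            KettleEnvironment.init();

            // Load the transformation definition from a .ktr file
            // ("my_transform.ktr" is a placeholder path)
            TransMeta transMeta = new TransMeta("my_transform.ktr");

            // Create the runtime transformation, start it, and wait for completion
            Trans trans = new Trans(transMeta);
            trans.execute(null); // no command-line arguments
            trans.waitUntilFinished();

            if (trans.getErrors() > 0) {
                throw new RuntimeException("Transformation finished with errors.");
            }
        }
    }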

News and Information

  • Deep, hands-on training FREE for attendees at the 2012 Strata Conference in Santa Clara, California. Sign up for our how-to training session on February 28th during the 'Tuesday Tutorials.' - Details
  • Pentaho Big Data components are now open source - To play well within the Hadoop open source ecosystem and to make Kettle the best and most pervasive ETL engine in the Big Data space, Pentaho has made all of its Hadoop and NoSQL components open source starting with the 4.3 release.
  • Kettle license moves to Apache - To further Kettle adoption within the Hadoop community, Pentaho has decided to move the Kettle open source license from LGPL to the more permissive Apache license. This removes questions about what restrictions apply to a derivative work that combines Kettle with Hadoop.
  • The 4.3 pre-release of Kettle with the new Big Data components is now available for download.
  • First set of Big Data How-To's Published - Check out the How-To's for Hadoop, MapR, Cassandra and MongoDB here.

Getting Started

It's easy to get started with Pentaho for Big Data.

  1. Watch the intro videos below.
  2. Read about Kettle and Big Data.
  3. Download and configure the software here.
  4. Try the How-To's for yourself.
  5. Join the Pentaho Big Data forum to let us know how you are using Big Data, ask questions, and give feedback.
  6. Tell all your friends and neighbors!

Intro Videos

    The first three videos compare using Pentaho Kettle to create and execute a simple MapReduce job with writing Java code to solve the same problem. The Kettle transformation shown here runs as the Mapper and Reducer within the cluster.

    What would the same task as "1) Pentaho MapReduce with Kettle" look like if you coded it in Java? At half an hour long, you may not want to watch the entire video; a condensed sketch of the Java approach follows below.
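    For a sense of the difference, here is roughly what the hand-coded Mapper and Reducer for a word count look like in plain Hadoop Java. This is a generic sketch, not the exact code from the video; job configuration, input/output paths, and packaging are omitted:

        import java.io.IOException;

        import org.apache.hadoop.io.IntWritable;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;
        import org.apache.hadoop.mapreduce.Reducer;

        // Mapper: emit (word, 1) for every token in the input line
        public class WordCountMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        context.write(word, ONE);
                    }
                }
            }
        }

        // Reducer: sum the counts emitted for each word
        class WordCountReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values,
                                  Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }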

    This is a quick summary of the previous two videos, "1) Pentaho MapReduce with Kettle" and "2) Straight Java", explaining why Pentaho Kettle boosts productivity and maintainability.

    A quick example of loading data into the Hadoop Distributed File System (HDFS) using Pentaho Kettle.
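    In the video the copy is done with Kettle steps, so no code is needed. For reference, the equivalent operation in plain Java against the Hadoop FileSystem API looks roughly like this; the NameNode address and file paths are placeholders:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class HdfsLoad {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                // Placeholder NameNode address (older Hadoop versions use
                // the property name "fs.default.name" instead)
                conf.set("fs.defaultFS", "hdfs://namenode:8020");

                FileSystem fs = FileSystem.get(conf);
                // Copy a local file up into the cluster
                fs.copyFromLocalFile(new Path("/tmp/weblogs.txt"),
                                     new Path("/user/pentaho/weblogs.txt"));
                fs.close();
            }
        }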

    A quick example of extracting data from the Hadoop Distributed File System (HDFS) using Pentaho Kettle.
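    And the reverse direction, reading a file back out of HDFS with the plain Java API, again with a placeholder host and path:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class HdfsExtract {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                conf.set("fs.defaultFS", "hdfs://namenode:8020"); // placeholder

                FileSystem fs = FileSystem.get(conf);
                // Stream the file out of the cluster line by line
                try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                        fs.open(new Path("/user/pentaho/weblogs.txt"))))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        System.out.println(line);
                    }
                }
                fs.close();
            }
        }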
