Hitachi Vantara Pentaho Community Wiki

Welcome to the Big Data space in the Pentaho Community wiki. This space is the community home and collection point for all things Big Data within the Pentaho ecosystem. It is the place to find documentation, how-tos, best practices, use cases, and other information about employing Pentaho technology as part of your overall Big Data strategy. It is also where you can share your own information and experiences. We look forward to your participation and contribution!

Overview

Pentaho's Big Data story revolves around Pentaho Data Integration, also known as Kettle. Kettle is a powerful Extraction, Transformation and Loading (ETL) engine that uses a metadata-driven approach. The Kettle engine provides data services for, and is embedded in, most of the applications within the Pentaho BI suite, from Spoon, the Kettle designer, to Pentaho Report Designer. Check out About Kettle and Big Data for more details on the Pentaho Big Data story.
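As a concrete illustration of what "embedded" means here, the sketch below runs a Kettle transformation from plain Java using the PDI API. This is a minimal example only: the .ktr file path is hypothetical, and it assumes the Kettle (PDI 4.x) libraries are on the classpath.

    import org.pentaho.di.core.KettleEnvironment;
    import org.pentaho.di.trans.Trans;
    import org.pentaho.di.trans.TransMeta;

    public class RunTransformation {
      public static void main(String[] args) throws Exception {
        // Initialize the Kettle environment (registers steps and plugins)
        KettleEnvironment.init();

        // Load the transformation definition from a .ktr file (hypothetical path)
        TransMeta transMeta = new TransMeta("/path/to/transformation.ktr");

        // Create and execute the transformation, then wait for it to finish
        Trans trans = new Trans(transMeta);
        trans.execute(null);
        trans.waitUntilFinished();

        if (trans.getErrors() > 0) {
          throw new RuntimeException("Transformation finished with errors.");
        }
      }
    }

The same engine that Spoon and the other suite applications embed is what runs here, which is why a transformation designed visually in Spoon can be executed unchanged from any Java application or, as described below, inside a Hadoop cluster.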

News and Information

Pentaho will be announcing on Monday morning EST, January 30th, that it is open sourcing its big data components and moving Kettle to the Apache license. Stay tuned for more information...

  • Pentaho Big Data components are now open source - In order to play well within the Hadoop open source ecosystem and make Kettle the best and most pervasive ETL engine in the Big Data space, Pentaho has open sourced all of its Hadoop and NoSQL components, starting with the 4.3 release.
  • Kettle license moves to Apache - To further Kettle adoption within the Hadoop community, Pentaho has decided to move the Kettle open source license from LGPL to the more permissive Apache license. This removes any question about what restrictions apply to a derivative work that combines Kettle with Hadoop.
  • 4.3 Pre-Release of Kettle with the new Big Data components will be available for download on Jan 30, 2012.
  • First set of Big Data How-To's Published - Check out the How-To's for MapR Hadoop and Cassandra NoSQL Database here.

Intro Videos

    A quick introduction to executing Kettle transforms as a Mapper and Reducer within the cluster.
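    For context on what running "as a Mapper and Reducer" means, the sketch below is a standard hand-coded Hadoop word count using the org.apache.hadoop.mapreduce API. It is not Pentaho code; it simply shows the Java mapper/reducer boilerplate that Pentaho MapReduce replaces with visually designed Kettle transformations running in the mapper and reducer slots.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
      // Mapper: emit (word, 1) for every token in the input line
      public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        @Override
        protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
          for (String token : value.toString().split("\\s+")) {
            if (!token.isEmpty()) {
              word.set(token);
              context.write(word, ONE);
            }
          }
        }
      }

      // Reducer: sum the counts for each word
      public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable v : values) sum += v.get();
          context.write(key, new IntWritable(sum));
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }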

    A quick example of loading into the Hadoop Distributed File System (HDFS) using Pentaho Kettle.

    A quick example of extracting data from the Hadoop Distributed File System (HDFS) using Pentaho Kettle.
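    The loading and extraction shown in these two videos is done with Kettle's Hadoop file steps. Conceptually, the work boils down to the plain Hadoop FileSystem API calls sketched below; this is not the Pentaho implementation, and the NameNode URL and file paths are assumptions for illustration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsCopy {
      public static void main(String[] args) throws Exception {
        // Point the client at the NameNode (hostname and port are assumptions)
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://namenode:8020");
        FileSystem fs = FileSystem.get(conf);

        // Load: copy a local file into HDFS
        fs.copyFromLocalFile(new Path("/tmp/weblogs.txt"),
                             new Path("/user/pentaho/weblogs.txt"));

        // Extract: copy a file from HDFS back to the local filesystem
        fs.copyToLocalFile(new Path("/user/pentaho/weblogs.txt"),
                           new Path("/tmp/weblogs_copy.txt"));

        fs.close();
      }
    }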
