The Pentaho Big Data Initiative
This wiki space is the community home and collection point for all things "Big Data" within the Pentaho ecosystem. This is the place to find documentation, how-tos, best practices, use cases and other information about employing Pentaho technology as part of your overall Big Data strategy. It is also where you can share your own information and experiences using Pentaho Big Data technology. We look forward to your participation and contribution!
Pentaho's Big Data story revolves around Pentaho Data Integration, also known as Kettle. Kettle is a powerful Extract, Transform and Load (ETL) engine that uses a metadata-driven approach. The Kettle engine provides data services for, and is embedded in, many of the applications within the Pentaho BI suite. Kettle comes with a graphical, drag-and-drop design environment for designing and running Kettle Jobs and Transformations. (ADD LINK FOR MORE DETAILS)
Using Kettle with Hadoop
Kettle can be used across a wide spectrum of use cases within the context of Hadoop. Kettle Jobs can be used to orchestrate Hadoop-related work, such as moving data into and out of HDFS and launching MapReduce processing, while Kettle Transformations can read from and write to Hadoop data sources.
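As a rough sketch of what launching such an orchestration Job might look like from the command line: Kettle ships with `kitchen.sh` for running Jobs (and `pan.sh` for Transformations). The install path, Job file name, and parameters below are hypothetical placeholders, and the command is built and echoed rather than executed so the sketch stands on its own without a PDI install.

```shell
# Illustrative only: invoking a Kettle Job from the shell.
# PDI_HOME and the .kjb path are assumed placeholders, not real defaults.
PDI_HOME="/opt/pentaho/data-integration"
JOB_FILE="/etl/jobs/hadoop_orchestration.kjb"

# kitchen.sh runs Kettle Jobs; -file points at the Job, -level sets log detail.
CMD="$PDI_HOME/kitchen.sh -file=$JOB_FILE -level=Basic"
echo "$CMD"
```

In a real deployment you would run the `kitchen.sh` command directly (or from a scheduler such as cron) instead of echoing it.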
This is a closed wiki space
The only people with access are Pentaho employees and Dave Reinke. (Chris will need to sign up for the wiki and send me his user id.)
This is a first attempt at an open source collaboration space for Big Data. It will eventually be opened up, but it is currently a work in progress and a place to collect use cases, demos, etc. The structure and initial content are a rough first draft, and I am not attached to any of it. It is a round lump of clay, waiting to be molded by the brilliant minds of the Big Data team.