Hitachi Vantara Pentaho Community Wiki

Developing Bundles

A PDI-enabled bundle is just a standard OSGI bundle which contains a Blueprint beans file declaring the PDI plugin classes. While Blueprint is only one of many ways of registering OSGI service classes, it's the only one the Bridge supports today. The reason has to do with the fact that PDI's PluginRegistry is not only a service locator, but also a factory for object types (Metas). We may extend support to other registration mechanisms in the future if there's interest from the community.

Generating a Bundle

The easiest way to create a bundle is to create a Maven project using the maven-bundle-plugin, or to base your project on one of our examples. Apache Karaf supplies a Maven Archetype which will automatically build a skeleton project for you.

mvn archetype:generate \
    -DarchetypeGroupId=org.apache.karaf.archetypes \
    -DarchetypeArtifactId=karaf-blueprint-archetype \
    -DarchetypeVersion=2.2.11 \
    -DgroupId=com.mycompany \
    -DartifactId=com.mycompany.blueprint \
    -Dversion=1.0-SNAPSHOT

Blueprint File

As with any OSGI Bundle using the Blueprint Beans Container, the beans file must be placed within the OSGI-INF/blueprint folder of the jar. If you use the archetype project generator, one will be created ready for you to customize. The name of the blueprint beans file is not important, only its location. It's also possible to have more than one beans file per bundle, though they will be treated separately.

PDI plugins with annotated classes can take advantage of the Pentaho Namespace extensions for blueprint. This is the recommended approach for developing PDI plugins in OSGI. You don't have to use the annotations, but not using them will make the XML configuration much more verbose. Details on how to use the PDI Plugin Annotations are available here: Writing your own Pentaho Data Integration Plug-In

Example using namespace extensions:

In this example the StepBeforeStartMonitor class carries the annotations required for the ExtensionPoint plugin type. Notice that you have to pass the type of the plugin to the <pen:di-plugin> tag; everything else is handled automatically for you.

Simple Example
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:pen="http://www.pentaho.com/schema/pentaho-blueprint">

  <bean id="extension_plugin_1" class="org.pentaho.di.monitor.step.StepBeforeStartMonitor">
    <pen:di-plugin type="org.pentaho.di.core.extension.ExtensionPointPluginType"/>
  </bean>

</blueprint>

Step plugins require a little more configuration, as multiple classes are involved in supporting a Step plugin. You must provide a mapping from each Kettle interface to the bean within the blueprint file that serves it for the plugin.

Step Example
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:pen="http://www.pentaho.com/schema/pentaho-blueprint"
           default-timeout="20000">

  <bean id="YamlInput_StepMeta" class="org.pentaho.di.plugins.examples.step.YamlInputMeta" scope="prototype">
    <pen:di-plugin type="org.pentaho.di.core.plugins.StepPluginType">

      <!-- Reference the extra class for the StepData -->
      <pen:di-plugin-mapping class="org.pentaho.di.trans.step.StepDataInterface" ref="YamlInput_StepData"/>

    </pen:di-plugin>
  </bean>

  <bean id="YamlInput_StepData" class="org.pentaho.di.plugins.examples.step.YamlInputData" scope="prototype"/>

</blueprint>


It is possible to register DI plugins without using the annotations and the namespace extensions. However, there's no advantage to doing so. The example below is included for the rare case that this is required.

Non-Annotated Registration
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           default-timeout="20000">

  <bean id="YamlInput_StepMeta" class="org.pentaho.di.plugins.examples.step.YamlInputMeta" scope="prototype"/>

  <bean id="YamlInput_StepData" class="org.pentaho.di.plugins.examples.step.YamlInputData" scope="prototype"/>

  <!-- The actual plugin class (PluginInterface), referencing the Meta and Data beans above -->
  <bean id="YamlPlugin" class="org.pentaho.di.osgi.OSGIPlugin">
    <property name="mainType" value="org.pentaho.di.trans.step.StepMetaInterface"/>
    <property name="name" value="Yaml Input From Osgi"/>
    <property name="ID" value="YamlInput"/>
    <property name="imageFile" value="YamlI.png"/>
    <property name="description" value="i18n:org.pentaho.di.trans.step:BaseStep.TypeLongDesc.YamlInput"/>
    <property name="pluginTypeInterface" value="org.pentaho.di.core.plugins.StepPluginType"/>
    <property name="category" value="i18n:org.pentaho.di.trans.step:BaseStep.Category.Input"/>
    <!--<property name="tooltip" value="i18n:org.pentaho.di.trans.step:BaseStep.TypeTooltipDesc.YamlInput"/>-->
    <property name="classToBeanMap">
      <map>
        <entry key="org.pentaho.di.trans.step.StepMetaInterface" value="YamlInput_StepMeta"/>
        <entry key="org.pentaho.di.trans.step.StepDataInterface" value="YamlInput_StepData"/>
      </map>
    </property>
  </bean>

  <!-- Register the plugin with the OSGI Service Registry -->
  <service id="pluginService" interface="org.pentaho.di.core.plugins.PluginInterface" ref="YamlPlugin">
    <service-properties>
      <entry key="PluginType" value="org.pentaho.di.core.plugins.StepPluginType"/>
    </service-properties>
  </service>

</blueprint>


Package Imports

If you're relying on BND or the maven-bundle-plugin, it may fail to add the packages required to make the Pentaho namespace work. If so, manually define the following imports: org.pentaho.di.osgi, org.pentaho.di.core.plugins.
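With the maven-bundle-plugin, the extra packages can be appended to the imports BND calculates. A minimal pom fragment sketch (the plugin version and surrounding coordinates are illustrative, not taken from the examples):

```xml
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <!-- List the Pentaho packages explicitly; the trailing '*'
           keeps all the imports BND would have calculated on its own -->
      <Import-Package>
        org.pentaho.di.osgi,
        org.pentaho.di.core.plugins,
        *
      </Import-Package>
    </instructions>
  </configuration>
</plugin>
```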


Handling Dependent Jars

There are a couple of options for handling dependencies. You can embed the dependent libraries within the bundle, or you can build a Karaf Feature for your plugin which installs your plugin bundle and all of its dependencies atomically. If you go the Feature route, another deployment option is packaging your feature file and all of its dependencies into a KAR (Karaf Archive). This allows single-artifact deployment of your bundle and all of its dependencies. KARs are the recommended deployment package for plugin developers.

Embedded Dependencies

OSGI Bundles support embedding dependent jars within the bundle. The result is commonly referred to as a "super-bundle" or "fat-bundle". The maven-bundle-plugin automates this process, as detailed in the "Embedding dependencies" section of its documentation. While this option will work, any library embedded within a bundle is only usable by that bundle. It's much better to use either of the following options, which allow dependent libraries to be shared between bundles.
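If you do need to embed, the maven-bundle-plugin's Embed-Dependency instruction does the work. A sketch of the relevant pom fragment (scope filter and version are illustrative):

```xml
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <!-- Copy all compile-scope dependencies into the bundle jar -->
      <Embed-Dependency>*;scope=compile</Embed-Dependency>
      <!-- Also embed their transitive dependencies -->
      <Embed-Transitive>true</Embed-Transitive>
    </instructions>
  </configuration>
</plugin>
```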

Feature Files

Karaf eases the burden of deploying many different OSGI bundles which collectively support some feature (in our case, PDI plugins) by allowing you to describe all dependent bundles in an XML document and assign them a Feature name; see the Karaf site for more detail. Once your Feature XML file is properly installed within the Karaf instance, it can be started and stopped atomically. You can also configure Karaf to start your feature automatically while PDI is initializing.
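A hand-written Feature file is just a list of bundle URLs under a named feature. A minimal sketch (the feature name, versions, and mvn: URLs are hypothetical placeholders, not real coordinates from the examples):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<features name="my-pdi-plugin-features" xmlns="http://karaf.apache.org/xmlns/features/v1.0.0">
  <!-- Installing this feature installs the plugin bundle and its dependency together -->
  <feature name="my-pdi-plugin" version="1.0.0">
    <bundle>mvn:com.mycompany/com.mycompany.blueprint/1.0-SNAPSHOT</bundle>
    <bundle>mvn:org.yaml/snakeyaml/1.13</bundle>
  </feature>
</features>
```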

While you can create Feature files manually, Karaf provides a Maven plugin to automatically create a Feature XML based on Maven dependencies; see Karaf's documentation for more information. The Step example project includes a sub-module which builds a Feature using this technique.

KAR File

KARs are simply zip files which contain your Feature XML and all dependent bundles. Karaf supplies a Maven plugin to automate the creation of KARs from Feature XML files. The Step example project demonstrates the usage of this plugin. For more information, see Karaf's documentation.
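In recent Karaf versions the karaf-maven-plugin can provide a "kar" packaging type directly. A sketch of the pom wiring, assuming that plugin (the version and module coordinates are illustrative; check Karaf's documentation for the goals matching your Karaf version):

```xml
<project>
  <!-- ... groupId/artifactId/version of your feature sub-module ... -->
  <packaging>kar</packaging>

  <build>
    <plugins>
      <!-- extensions=true lets the plugin register the 'kar' packaging -->
      <plugin>
        <groupId>org.apache.karaf.tooling</groupId>
        <artifactId>karaf-maven-plugin</artifactId>
        <version>3.0.8</version>
        <extensions>true</extensions>
      </plugin>
    </plugins>
  </build>
</project>
```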

Available Examples

There are a couple of examples available in the PDI-OSGI-Bridge repository:
TODO: Add more examples of real plugins
