You can use the Job job entry to execute a previously defined job. For convenience, you can also create a new job from within the dialog by pressing the New Job button.
The Job entry lets you perform "functional decomposition"; that is, you break jobs out into more manageable units. For example, you would not write a data warehouse load as a single job containing 500 entries. It is better to create smaller jobs and aggregate them.
Warning: Although it is possible to create a recursive, never-ending job that points to itself, be aware that such a job will eventually fail with an out-of-memory or stack overflow error.
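The same idea applies if you want to launch a previously defined job from your own code instead of from Spoon. The following minimal sketch uses the Kettle Java API; the file path is a placeholder and no repository connection is assumed, so treat it as an illustration rather than the entry's internal implementation.

import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.core.Result;
import org.pentaho.di.job.Job;
import org.pentaho.di.job.JobMeta;

public class RunJobExample {
  public static void main(String[] args) throws Exception {
    // Initialize the Kettle environment (plugins, logging, variables)
    KettleEnvironment.init();

    // Load the previously defined job from its XML (.kjb) file; placeholder path
    JobMeta jobMeta = new JobMeta("/etl/load_warehouse.kjb", null);

    // Create the job, run it, and block until it finishes
    Job job = new Job(null, jobMeta);
    job.start();
    job.waitUntilFinished();

    // Check the outcome
    Result result = job.getResult();
    if (result.getNrErrors() > 0) {
      System.err.println("Job finished with errors");
    }
  }
}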
Job Specification tab
Name of the Job Entry
The unique name of the job entry on the canvas. A job entry can be placed on the canvas several times; however, it will still be the same job entry.
If you are not working in a repository, specify the XML file name of the job to start. Click to browse through your local files.
Specify by Name and Directory
If you are working in the DI Repository or a database repository, specify the name of the job to start. Click the button to browse through the DI Repository.
Specify by Reference
If you specify a transformation or job by reference, you can rename or move it around in the DI Repository. The reference (identifier) is stored, not the name and directory.
Copy previous results to args?
The results from a previous transformation can be copied as arguments of the job using the "Copy rows to result" step. If Execute for every input row is enabled, then each row is a set of command-line arguments to be passed into the job; otherwise, only the first row is used to generate the command-line arguments.
Copy previous results to parameters?
If Execute for every input row is enabled, then each row is a set of command-line arguments to be passed into the job; otherwise, only the first row is used to generate the command-line arguments.
Execute for every input row?
Implements looping; if the previous job entry returns a set of result rows, the job executes once for every row found. One row is passed to the job at each execution. For example, you can execute a job for each file found in a directory (see the sketch after this tab's options).
Remote slave server
The slave server on which to execute the job
Pass job export to slave
Pass the complete job (including referenced sub-jobs and sub-transformations) to the remote server.
Wait for the remote job to finish?
Enable to block until the job on the slave server has finished executing
Follow local abort to remote job
Enable to send the abort signal to the remote job if the job is aborted locally
Expand child jobs and transformations on the server
When the remote job starts child jobs and transformations, they are exposed on the slave server and can be monitored.
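To make the Execute for every input row behavior more concrete, here is a rough sketch of the equivalent loop written against the Kettle Java API: the job is executed once per incoming result row, with one field of each row handed down as a named parameter. The job path, the FILENAME parameter, the field index, and the obtainPreviousResult() helper are all illustrative assumptions, not part of the dialog.

import java.util.List;
import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.core.Result;
import org.pentaho.di.core.RowMetaAndData;
import org.pentaho.di.job.Job;
import org.pentaho.di.job.JobMeta;

public class ExecuteForEveryRowSketch {
  public static void main(String[] args) throws Exception {
    KettleEnvironment.init();

    // Stand-in for the result rows produced by the previous job entry,
    // for example via the "Copy rows to result" transformation step.
    Result previousResult = obtainPreviousResult(); // hypothetical helper
    List<RowMetaAndData> rows = previousResult.getRows();

    // Execute the job once for every input row
    for (RowMetaAndData row : rows) {
      JobMeta jobMeta = new JobMeta("/etl/process_one_file.kjb", null); // placeholder path
      // Pass the first field of the row as a named parameter;
      // the parameter must be defined in the job itself
      jobMeta.setParameterValue("FILENAME", row.getString(0, ""));
      Job job = new Job(null, jobMeta);
      job.activateParameters();
      job.start();
      job.waitUntilFinished();
    }
  }

  private static Result obtainPreviousResult() {
    // Placeholder: in a real job this comes from the previous entry's result
    return new Result();
  }
}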
Logging Settings tab
By default, if you do not set logging, Pentaho Data Integration takes the log entries that are being generated and creates a log record inside the job. For example, suppose a job has three transformations to run and you have not set logging. The transformations do not output logging information to other files, locations, or special configuration; the job simply executes and puts its logging information into its master job log.
In most instances, it is acceptable for logging information to be available in the job log. For example, if you have dimension loads, you want the logs for those runs to appear in the job log; if there are errors in the transformations, they are displayed in the job log as well. If, however, you want all your log information kept in one place, you must set up logging.
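If you prefer to see how a separate log file relates to the job programmatically, the hedged sketch below shows one way to capture a job's log channel in its own file using the Kettle API. The paths are placeholders, and the exact class and constructor details may vary between PDI versions.

import org.apache.commons.vfs2.FileObject;
import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.core.logging.LogChannelFileWriter;
import org.pentaho.di.core.vfs.KettleVFS;
import org.pentaho.di.job.Job;
import org.pentaho.di.job.JobMeta;

public class SeparateJobLogSketch {
  public static void main(String[] args) throws Exception {
    KettleEnvironment.init();

    JobMeta jobMeta = new JobMeta("/etl/load_dimensions.kjb", null); // placeholder path
    Job job = new Job(null, jobMeta);

    // Write this job's log channel to its own file, appending if the file exists
    FileObject logFile = KettleVFS.getFileObject("/logs/load_dimensions.log");
    LogChannelFileWriter logWriter =
        new LogChannelFileWriter(job.getLogChannelId(), logFile, true);
    logWriter.startLogging();

    job.start();
    job.waitUntilFinished();

    logWriter.stopLogging();
  }
}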
Specify logfile?
Enable to specify a separate logging file for the execution of this job
Append logfile?
Enable to append to the logfile as opposed to creating a new one
Name of logfile
The directory and base name of the log file; for example C:\logs
Create parent folder
Create the parent folder for the log file if it does not exist
Extension of logfile
The file name extension; for example, log or txt
Include date in logfile?
Adds the system date to the filename with format YYYYMMDD (_20051231).
Include time in logfile?
Adds the system time to the filename with format HHMMSS (_235959).
Arguments tab
Specify which command-line arguments will be passed to the job.
Parameters tab
Specify which parameters will be passed to the job:
Pass all parameter values down to the sub-job
Enable this option to pass all parameters of the job down to the sub-job.
Parameter
Specify the parameter name that will be passed to the job.
Stream column name
Allows you to capture fields of incoming records of a result set as a parameter.
Value
Allows you to specify the values for the job's parameters. You can do this by:
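For completeness, here is a hedged sketch of supplying the same named parameters when a job is launched from Java code rather than from a parent job; the START_DATE parameter and its value are illustrative assumptions and must correspond to parameters actually defined in the job.

import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.job.Job;
import org.pentaho.di.job.JobMeta;

public class JobParametersSketch {
  public static void main(String[] args) throws Exception {
    KettleEnvironment.init();

    JobMeta jobMeta = new JobMeta("/etl/load_warehouse.kjb", null); // placeholder path

    // List the named parameters the job declares, then set a value for one of them
    for (String name : jobMeta.listParameters()) {
      System.out.println("Declared parameter: " + name);
    }
    jobMeta.setParameterValue("START_DATE", "2005-12-31"); // assumed parameter

    Job job = new Job(null, jobMeta);
    job.activateParameters();
    job.start();
    job.waitUntilFinished();
  }
}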