Settings
(1) Query DB meta information: This is used to retrieve information about tables and columns during editing. To prevent long loading times, you can deactivate DB helper functions here. Note: If you change this setting, you should reload an open task so that you receive updated information.
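To illustrate what kind of metadata is meant, the following generic sketch asks a database for the columns of a table. SQLite and its PRAGMA table_info are used only to keep the example self-contained; they are not Lobster's internal mechanism.

```python
import sqlite3

# Stand-in database with one table, so the sketch runs on its own.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")

# This is the kind of table/column information the editor retrieves.
for cid, name, col_type, notnull, default, pk in con.execute(
    "PRAGMA table_info(orders)"
):
    print(name, col_type)  # -> "id INTEGER", "customer TEXT"
```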
(2) ID: The ID of the pipeline. With the folder icon on the right, you can display the search index of the ETL/ELT pipeline. These values can be queried in the expert search.
(3) Documentation URL: In this field, you can specify any URL pointing to documentation of this ETL/ELT pipeline. It can be called up with the icon on the right. Additionally, system constants can be inserted into the URL there.
(6) Cache input files up to: Files read in child tasks are kept in memory (and not on disk) if they are smaller than the specified size.
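The threshold behaviour can be pictured with a minimal sketch (not Lobster's implementation; the limit value here is assumed):

```python
import io
import os

CACHE_LIMIT_BYTES = 4 * 1024 * 1024  # hypothetical "cache input files up to" value

def open_input(path: str):
    """Keep small files in memory; stream larger ones from disk."""
    if os.path.getsize(path) <= CACHE_LIMIT_BYTES:
        with open(path, "rb") as f:
            return io.BytesIO(f.read())  # below the limit: held in memory
    return open(path, "rb")              # above the limit: read from disk
```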
(7) Log level in ETL/ELT script: Defines which information is logged. Attention: If debug information is logged, the contents of all ETL/ELT fields and ETL/ELT variables are written on every run, which results in a very large amount of data. Use this mode only if necessary.
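The reason debug output grows so quickly can be sketched generically: at debug level, a line is written for every field of every record on every run (Python's standard logging stands in for the ETL/ELT log here):

```python
import logging

logging.basicConfig(level=logging.DEBUG)  # the costly setting
log = logging.getLogger("pipeline")

record = {"order_id": "4711", "customer": "ACME"}  # hypothetical ETL/ELT fields
for name, value in record.items():
    # Fires for every field of every record in every run, which is what
    # makes the log volume very large.
    log.debug("field %s = %r", name, value)
```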
(8) ETL/ELT link: Relevant for REST calls, but not active in the standard installation.
(9) Process type: The processing of the ETL/ELT pipeline can take place locally or on a remote server (to conserve resources of the main system).
Important note:
Files to be processed by the remote server must also be located on the remote server, and the paths in the ETL/ELT pipeline must correspond to those of the remote server. For example, if the file is located remotely at /opt/input/file.csv and locally at /opt/Lobster/data/webapps/root/upload/file.csv, then the path /opt/input/file.csv must be specified as the input source in the ETL/ELT pipeline (see the path-mapping sketch after this note).
Likewise, destination files are saved on the remote server.
That is, you must either mount the folders of the remote server on the Integration Server, or vice versa, so that the files can be processed directly. Alternatively, the distribution of the files can be organised with profiles (e.g. in workflows).
If you use databases in the ETL/ELT pipeline, they must also be accessible from the remote server, i.e. you have to adapt the configuration file ./etc/database.xml of the Lobster Bee in use.
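The path correspondence from the example above amounts to a simple prefix mapping. A sketch of that rule (the two prefixes are taken from the example; the helper itself is hypothetical):

```python
LOCAL_PREFIX = "/opt/Lobster/data/webapps/root/upload/"  # path on the local system
REMOTE_PREFIX = "/opt/input/"                            # the same folder as seen by the remote server

def remote_path(local_path: str) -> str:
    """Translate a local path into the path to enter in the ETL/ELT pipeline."""
    if not local_path.startswith(LOCAL_PREFIX):
        raise ValueError(f"{local_path} is not under {LOCAL_PREFIX}")
    return REMOTE_PREFIX + local_path[len(LOCAL_PREFIX):]

print(remote_path("/opt/Lobster/data/webapps/root/upload/file.csv"))
# -> /opt/input/file.csv (the input source to specify in the pipeline)
```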
(10) May only run in one instance (strict serial processing): If set, multiple instances of this ETL/ELT pipeline cannot run at the same time. So if there is already a job running for this pipeline, you cannot start another one.
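Conceptually this corresponds to an exclusive lock per pipeline. A generic sketch of strict serial processing (not Lobster's mechanism; Unix-only because of fcntl):

```python
import fcntl

def run_exclusively(lock_file: str, job) -> bool:
    """Run job() only if no other instance currently holds the lock."""
    with open(lock_file, "w") as f:
        try:
            fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)  # non-blocking: fail fast
        except BlockingIOError:
            return False  # a job for this pipeline is already running
        job()
        return True  # the lock is released when the file is closed
```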
(11) Preserve Null values in functions: If this checkbox is set, NULL values that are read in will not be replaced by an empty string in function chains.
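The effect of the checkbox, shown in generic terms (illustrative only, not Lobster code):

```python
def read_value(value, preserve_null: bool):
    """Checkbox off: NULL becomes an empty string. Checkbox on: NULL is passed through."""
    if value is None and not preserve_null:
        return ""  # default: function chains only ever see strings
    return value   # preserved: functions can distinguish NULL from ""

print(repr(read_value(None, preserve_null=False)))  # ''
print(repr(read_value(None, preserve_null=True)))   # None
```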
(12) Clipboard file based (otherwise in memory): If this checkbox is set, the clipboard is kept in a file and not in memory.
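The trade-off is memory footprint versus speed, as this generic illustration of the toggle shows (Python's standard library stands in for the clipboard):

```python
import io
import tempfile

FILE_BASED = True  # corresponds to the checkbox

# File-based: less RAM used, slightly slower. In memory: faster, but the
# entire clipboard content occupies RAM for the duration of the run.
clipboard = tempfile.TemporaryFile() if FILE_BASED else io.BytesIO()
clipboard.write(b"intermediate record data")
clipboard.seek(0)
print(clipboard.read())
```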
(13) Load backup: A backup can be loaded here, see also (16).
(14) Save & start: The pipeline is saved and started. Note: Variables that can be changed at startup are ignored here.
(15) Save & test run: The pipeline is saved and a test run is started. The input data can be limited in a further dialogue. Note: Variables that can be changed at startup are ignored here.
(16) Save & close: The pipeline is saved and closed. In addition, a backup is created, see also (13).
(17) Image: You can upload an image for this entry. This image is shown in the overview when the tile view is selected.