Basic SQL (Input Agent cron)
The input data is read from a database using an SQL query.
If the Input Agent starts a job, it processes all records returned by the SQL query. The result set of the database query is stored as a backup file in CSV format.
Settings
(1) DB alias: Alias for the connection to the database (selectable via icon on the right). A database alias stands for a configured connection to a database system. See section Databases/connectors (administration). In addition, all variables with prefix MSG_CALL_ defined in the profile are available for selection. This makes it possible to pass the alias from a triggering profile or with an HTTP call. Note: For easy testability, the variables should be defined with a default value.
(2) SQL timeout: Timeout for (6).
(3) Tables/Procedures/Views: Displays all tables and procedures for the selected alias (1).
(4) SQL Wizard: A simple wizard that can be used to generate the SELECT statement (6).
(5) Show SQL parameters: Displays the parameter window (7).
(6) SQL: SQL query that reads the data from the database. Only a single SELECT statement can be specified. Comments are only allowed with the syntax /*comment*/; any other comment syntax causes an error. Queries can be saved, and saved queries can be selected via the context menu; a separate dialogue is available for managing them. You can use variables (syntax @MSG_CALL_MYVAR@), system constants (syntax %MYCONST%), parameters (see (7) for the syntax) and placeholders for the timestamp of the last profile run in the query. In addition, permanent profile values are allowed (syntax %perm:KEYNAME%).
When using variables, it should be noted that only "MSG_CALL_" variables make sense here, because only these are initialised in phase 1. In addition, these variables must also have been created in the profile. A practical application example would be the use of HTTP request parameters in the SQL query, e.g. @MSG_CALL_PARAM1@. Another possibility is to pass values from preceding profiles in profile chains. Examples:
SELECT * FROM @MSG_CALL_MYTABLE@
SELECT * FROM %MY_TABLE_CONST%
SELECT * FROM orders WHERE order_no=@MSG_CALL_PARAM1@
SELECT * FROM products WHERE id=@1:s@
SELECT * FROM city WHERE zip=%perm:zip%
The placeholders for the timestamp of the last run of the profile are @0:t@ (complete timestamp), @0:d@ (date only) and @0:s#<template>@ (timestamp formatted as a string). The timestamp is updated with each run of the profile, but only if the placeholder is used in the query. It is also updated if the cron job ran but no job was generated; it remains unchanged only if an error occurred in phase 1. Examples:
SELECT * FROM orders WHERE act_time > @0:t@
SELECT * FROM orders WHERE act_time > @0:d@
SELECT * FROM orders WHERE act_time > @0:s#yyyyMMdd@
(7) Window on the right (SQL parameters): Is only displayed if (5) is set. Here you can define parameters (via the context menu) that can be used in the WHERE part of the query.
The values of these parameters can be fixed values, variables (syntax @MYVAR@), system constants (syntax %MYCONST%) and permanent profile values (syntax %perm:KEYNAME%). See selection menu in GUI.
The placeholders for these parameters have the syntax @<parameter number>:<data type>@, e.g. @1:s@. Allowed data types:
Abbreviation | Data type
i | Integer
f | Float
r | Real
d | Date
t | Timestamp
s | String
l | Long
v | Boolean
b | Blob
x | Textstream
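A hypothetical query combining two typed parameters (the table and column names are only illustrative) could look like this:

```sql
-- parameter 1 is bound as an integer, parameter 2 as a timestamp
SELECT * FROM orders WHERE order_no = @1:i@ AND act_time > @2:t@
```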
In addition, the special placeholders for the last run of the profile (@0:t@, @0:d@ and @0:s#<template>@, see above) are available.
As with (6), only MSG_CALL_ variables are useful here, because only these are initialised in phase 1, and they must have been created in the profile. A practical application example would be the use of HTTP request parameters in the SQL query, e.g. @MSG_CALL_PARAM1@. Another possibility is to pass values from preceding profiles in profile chains.
Example:
Param nr: "1".
Value: "@MSG_CALL_PARAM1@".
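Taken together with (6), the parameter above might be used as follows (a hypothetical sketch; the table and column names are placeholders):

```sql
-- @1:s@ refers to parameter number 1, bound as a string;
-- its value is taken from the variable @MSG_CALL_PARAM1@
SELECT * FROM orders WHERE order_no = @1:s@
```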
(8) Timeout: Timeout for (9) and (10).
(9) Test SQL: Tests the SQL statement in (6). If the SQL statement uses "*" instead of column names, the actual column names are read from the database and inserted into (6).
(10) Create structure: The columns of the SQL statement are created as fields below a new node in the source structure. Existing mapping instructions are retained. The data types and field lengths of the columns are taken from the database definition.
(11) Delete data: Causes all rows that were read to be deleted from the table. If isolation level "Read Committed" is set, another program can insert data at the same time. Attention: This checkbox should only be used with extreme caution. If the number of deleted rows differs from the number of rows read, no job is started and an error is generated.
A WHERE condition can be used to narrow the deletion. The WHERE condition from (6) is applied with the same parameters, which is especially important if the data types do not match exactly.
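For illustration, assuming the query in (6) reads new rows via the last-run placeholder, the subsequent deletion is restricted by the same condition. Conceptually (a hypothetical sketch with placeholder names):

```sql
-- query in (6):
SELECT * FROM orders WHERE act_time > @0:t@
-- with checkbox (11) set, only the rows read are deleted,
-- roughly equivalent to:
DELETE FROM orders WHERE act_time > @0:t@
```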
(12) Blob fields are Base64 encoded: If this checkbox is set, the content of BLOB fields (binary data) is encoded with Base64 before being passed to the profile. Note: Otherwise, binary data may cause an error when the source tree is created. Fields read in this way can be decoded with the function decode Base64().
(13) Use replacement values on empty result set: If this option is selected, data is supplied to the profile even if the SQL query returns no result set. The specified replacement values are then transferred to the profile. The data must be in CSV format (comma as delimiter).
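For example, replacement values for a hypothetical three-column result set could be entered as a comma-separated line (the values shown are only illustrative):

```
0,unknown,0.00
```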
Blanks in query result (MSSQL/Informix)
In MSSQL and Informix databases, the result values of SQL calls are trimmed of leading and trailing spaces by default. This behaviour can be changed with the parameter "skipTrimResultValues" in the database alias configuration file (DatabaseService).