DefaultFileSQLCron

Configuration

This class is configured in the Business Connector (Input Agent cron).

Class name

com.ebd.hub.datawizard.util.DefaultFileSQLCron

Description


This class works analogously to DefaultSQLCron, with the following differences.


  • The result data cannot be dumped into a CSV file.

  • The result data is read from the database via a cursor.

  • The result data is temporarily written to a CSV file. Processing of the data starts only after the last record has been read.


This allows for the processing of vast amounts of data.
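The streaming pattern described above can be illustrated with a short conceptual sketch. This is not Lobster code; it only shows the general idea (fetch in batches via a cursor, spool to a temporary CSV file, process only after the last record), using an invented in-memory table:

```python
# Conceptual sketch (not Lobster code): stream a large result set through a
# cursor into a temporary CSV file; processing starts only after the last
# row has been fetched, so memory use stays flat regardless of result size.
import csv
import sqlite3
import tempfile

# In-memory demo database standing in for the real source database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mp_header (id INTEGER, sender TEXT, receiver TEXT)")
con.executemany(
    "INSERT INTO mp_header VALUES (?, ?, ?)",
    [(i, f"s{i}", f"r{i}") for i in range(10_000)],
)

rows_written = 0
with tempfile.NamedTemporaryFile(
    "w+", newline="", suffix=".csv", delete=False
) as tmp:
    writer = csv.writer(tmp, delimiter=";")
    cur = con.execute("SELECT id, sender, receiver FROM mp_header")
    # fetchmany keeps memory flat: only one batch is held at a time.
    while batch := cur.fetchmany(500):
        writer.writerows(batch)
        rows_written += len(batch)
    tmp_path = tmp.name

# Only now, after the last record was read, would processing of the file start.
print(rows_written)  # 10000
```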

Important note: Since creating the configuration file manually is not easy the first time, you can use the SQL Configuration Wizard to generate it.

Note: A source structure for the file created by the DefaultFileSQLCron can automatically be created in the menu of the source structure.

Example


Important note: Suppose we have a string variable MSG_CALL_MYVAR with the content 'USA', 'UK'. If you now use the definition MSG_CALL_MYVAR=100 in the configuration file shown below, the SQL query SELECT id, name FROM supplier WHERE country IN (@100:s@) will not work. The reason is that the placeholder represents a single string, so the query is not resolved into SELECT id, name FROM supplier WHERE country IN ('USA','UK'), but into SELECT id, name FROM supplier WHERE country IN (''USA','UK''). Unfortunately, such queries can only be solved via dependencies, as shown below. The SQL Configuration Wizard will help you do this.

Important note: The placeholder number for variables must not be smaller than 100.

Note: You can use permanent profile values (syntax %perm:KEYNAME%).
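To make the dependency workaround concrete, here is a minimal sketch (the table, column, and value names are invented for illustration) that replaces the IN clause with a per-row join via a dependency:

```
#select.0 delivers the country values, one per row
select.0=select country from my_country_list
#select.1 is executed once per row of select.0, with the country value joined in
select.1=select id, name from supplier where country = @1:s@
#select.1 takes its value from column 1 of select.0
dependencies.1=0:1
```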

sample_sql_cron.properties
#
#sample config file for DefaultFileSQLCron
#
#which db connection to use
db_alias=hub
#you can define multiple pre- and post-statements here (separated by ";"), like:
#pre.statement=lock table table_a in exclusive mode;lock table table_b in exclusive mode
#post.statement=
#create csv file for debugging - uncomment if needed
#save_as_csv=/tmp/sqldump.csv
#for each row of select.0 create a new request (job) within datawizard if param is set to true
result.split=false
#if blob columns are selected, you may want to encode the content by Base64
#encode_blob=true
#
#here are the statements... must start with 0, step size is 1!
#
select.0=select 'A',id,sender,receiver,version from mp_header where date_at > @0:d@
select.1=select 'B',cp,de,co,ac from mp_data where id = @1:i@
select.2=select 'C',cp from mp_data_sub1 where id = @1:i@
#
#select.1 needs the value of column 2 from statement 0 (select.0) for execution (join); multiple columns are separated by ','
#select.2 needs the value of column 2 from statement 0 (select.0) for execution (join); multiple columns are separated by ','
#Column positions are counted starting at 1. The first dependency value is referenced as @1:xxx@, etc.
#Note: to use the time of the last Cron job run, use tag @0:d@ (date) or @0:t@ (timestamp)
#
dependencies.1=0:2
dependencies.2=0:2
#
#(Passed) variables can be used as well
MSG_CALL_VAR_MY_VARIABLE=100
#This will replace filler @100:i@ in any SQL select statement by the integer value of the named variable
#
#
#use limit.xxx to limit the result set, e.g.
#limit.0=500 limits select.0 to a maximum of 500 rows. If not set, no limit applies
#
#if an SQL statement finds no values, you can define a dummy record instead
#define a valid CSV line where the delimiter is ';'
#example for 'dummy' record for select.0
#empty.0=A;0;Nothing;No one;1.0
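As a concrete illustration of the variable filler mechanism described in the comments above, here is a hypothetical standalone configuration (the variable, table, and column names are invented):

```
#the passed variable MSG_CALL_VAR_MIN_QTY is mapped to placeholder 101
#(variable placeholders must not be smaller than 100)
MSG_CALL_VAR_MIN_QTY=101
#at runtime, @101:i@ is replaced by the integer value of that variable
select.0=select id, qty from orders where qty > @101:i@
```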

Blanks in query result (MSSQL/Informix)


In MSSQL and Informix databases, leading and trailing spaces are trimmed from the result values of SQL calls by default. The "skipTrimResultValues" parameter can be set in the database alias configuration file to change this behaviour.
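A sketch of how this might look in the database alias configuration file. The parameter name is taken from the text above; the value true and the surrounding file layout are assumptions, so check against your installation:

```
#assumption: true keeps leading/trailing blanks in result values (MSSQL/Informix)
skipTrimResultValues=true
```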