System crash and crashed jobs

When the Lobster Integration Server is shut down in a regular manner, jobs that are still running at this time are completed first. The actual stop of the server is therefore delayed until these jobs have terminated. However, as soon as the stop has been triggered, no new jobs are started.

In certain cases, for example, if the target system does not respond in phase 6, a single profile may run for a very long time. In this state, the Integration Server cannot be restarted and the administrator must decide whether to force the stop (several variants are available). See section Control Center → System → Settings.

When forcing the stop, it is the user's responsibility to ensure that no relevant conversion processes are still running, but only long-running or pending cron operations. Otherwise, a loss of the data currently being processed cannot be completely ruled out.

It is important not to start the Integration Server again as long as the old process has not completely terminated, because this can lead to an inconsistent state of the system. See also section Configuration of shutdown behaviour in ./etc/startup.xml. Because network connections may stay alive on the operating system level for a few seconds after the process has ended, a short wait before a new start is recommended.
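The wait-before-restart recommendation can be scripted. The following is a minimal sketch; how the PID of the old server process is obtained is installation-specific and is simply passed in here (the function name and grace period are illustrative assumptions, not part of the product):

```shell
# Sketch: wait until the old Integration Server process has fully terminated,
# then allow a grace period for lingering OS-level network connections.
wait_for_exit() {
  pid=$1
  grace=${2:-5}                          # default grace period: 5 seconds
  while kill -0 "$pid" 2>/dev/null; do   # signal 0 = existence check only
    sleep 1
  done
  sleep "$grace"                         # connections may linger a few seconds
}
```

Only after wait_for_exit returns should the new server instance be started.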

Crashed jobs


See also section Retention periods of backup files, logs, crashed jobs.

If a job is not finished because of a forced stop, it is marked as a 'crashed job'. If Lobster Integration is subsequently restarted, all crashed jobs are restarted as well. Processing is not continued in the phase in which the job was stopped, but starts again from the very beginning. If the forced stop happened after phase 3, all functions were already executed, e.g. the autonumber functions (autonumber(a), autonumber(a,b,c), autonumber-system-wide(a), autonumber-system-wide(a,b,c)) and database operations in functions. Therefore, when implementing profiles, it is the responsibility of the developer/user to consider the possibility of a crash and to evaluate the possible impact of a re-execution on the process.

Crashed jobs can also occur because of a computer crash, hardware failure, power failure, etc. In this case, the crashed jobs will also be executed from the beginning after a restart.


How Lobster Integration processes crashed jobs:

  • As a prerequisite, parameter trackCrashedJobs must be set to true in configuration file ./etc/startup.xml.

  • Lobster Integration will then create a file in directory ./datawizard/backup/lock each time a job starts.

  • The file is deleted again when the job is done, regardless of whether it was successful or not.

  • The name of the file is <job number>.lck and it can be read in a text editor. It contains the job number, the name of the input file and the ID (not the name) of the associated profile.

  • The file does not contain the original payload. As a result, the job's backup file must still be present to restart the job.

  • If there are crashed jobs, the backups still exist and the feature is activated, the jobs are restarted after an adjustable time (restoreWaitTime in ./etc/startup.xml).

  • The detail logs in the Control Center will contain a comment that this is a restarted crashed job.

  • If the feature is not enabled, each crashed job will only cause an entry in the database table dw_log_sum, so that it is at least visible to users.
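The two parameters involved in this mechanism live in ./etc/startup.xml. A schematic sketch follows; only the parameter names are taken from this section, while the surrounding element syntax and the restoreWaitTime value are assumptions (check your installation for the exact format and the unit of the wait time):

```xml
<!-- Schematic sketch only; the exact element syntax of ./etc/startup.xml
     depends on your installation and version. -->
<Set name="trackCrashedJobs">true</Set>   <!-- enable crashed-job tracking -->
<Set name="restoreWaitTime">60</Set>      <!-- example value: wait before restarting crashed jobs -->
```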


In a production system, the frequency of such crashed jobs should be reduced to a minimum by stabilising the system environment.

See also sections Settings, Trigger Force Stop via HTTP and Configuration of shutdown behaviour in ./etc/startup.xml.

Preventing the restart of crashed jobs


To prevent a restart of all crashed jobs, the parameter startCrashedJobs in configuration file ./etc/startup.xml can be used.

If you want to prevent the restart of individual crashed jobs, you can search for files with name <job number>.lck in subdirectory lock of the backup directory and delete them selectively. The default value for the backup directory is ./datawizard/backup. See parameter backupDir in configuration file ./etc/startup.xml.
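The selective approach can be sketched as a shell session. The job numbers 4711/4712 are illustrative, and a scratch directory stands in for the default backup directory ./datawizard/backup:

```shell
# Sketch: selectively prevent the restart of one crashed job by deleting
# its lock file. A scratch directory stands in for ./datawizard/backup here.
BACKUP_DIR=$(mktemp -d)
mkdir -p "$BACKUP_DIR/lock"
touch "$BACKUP_DIR/lock/4711.lck" "$BACKUP_DIR/lock/4712.lck"   # two crashed jobs

ls "$BACKUP_DIR/lock"            # inspect which jobs crashed

rm "$BACKUP_DIR/lock/4711.lck"   # job 4711 will not be restarted; 4712 will
```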

Parameter ignoreOldCrashedJobs allows you to prevent the restart of crashed jobs older than a specified number of days.
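Schematically, these two parameters could appear in ./etc/startup.xml as follows. The element syntax and the values false/7 are assumptions for illustration; only the parameter names are taken from this section:

```xml
<!-- Schematic sketch only; check your installation for the exact syntax. -->
<Set name="startCrashedJobs">false</Set>     <!-- assumption: false prevents the restart of all crashed jobs -->
<Set name="ignoreOldCrashedJobs">7</Set>     <!-- example: ignore crashed jobs older than 7 days -->
```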