Prioritising profiles (performance)

Under certain circumstances, it makes sense to execute a profile with priority, for example, because particularly time-critical data needs to be processed. Let’s suppose that you receive a notification that a particular customer must be suspended in the database immediately and that no more orders should be accepted from them. You would, of course, use event-driven handling for messages of this kind, e.g. via the FTP or SMTP Input Agent instead of a time-driven Input Agent. This would normally ensure that the message is processed immediately.

However, sometimes there may be so much to do that all of the profile processing threads are occupied. Every new job is then placed in a queue and processed sequentially: as soon as a profile run finishes, the next job in the queue is started. And now, of all times, the queue is extremely long. The notification that a customer should be suspended might end up as number 123, and if the queue also contains some really large tasks, things could take a while. In these cases, you have the option to raise the priority of the profile. Of course, we should not need to tell you this, but use this option as sparingly as possible - only where it really matters. Otherwise, the entire concept soon becomes meaningless.
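The scheduling behaviour described above can be sketched as a simple priority queue. This is a hypothetical illustration, not the product's actual implementation: the priority values and the `JobQueue` class are assumptions made for the example.

```python
import heapq
import itertools

# Tie-breaker counter: jobs with equal priority keep their arrival order.
_counter = itertools.count()

class JobQueue:
    """Hypothetical sketch of a job queue with priorities
    (lower number = more urgent)."""

    def __init__(self):
        self._heap = []

    def submit(self, name, priority=5):
        # Routine jobs use the default priority and queue up behind
        # earlier arrivals; a prioritised profile jumps ahead.
        heapq.heappush(self._heap, (priority, next(_counter), name))

    def next_job(self):
        _priority, _order, name = heapq.heappop(self._heap)
        return name

q = JobQueue()
for i in range(3):
    q.submit(f"bulk-order-{i}")           # routine jobs, priority 5
q.submit("suspend-customer", priority=1)  # the urgent, prioritised profile

print(q.next_job())  # the urgent job is taken first, despite arriving last
```

The point of the tie-breaker counter is that two jobs with the same priority are still processed strictly in arrival order, which matches the sequential queue behaviour described above.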

Another possible use case is a profile that receives data via FTP where the filename is, unfortunately, always the same. A combination of several circumstances can then lead to undesirable effects.

  • The external system uploads the file xyz.txt via FTP.

  • At that moment, your system is busy and the queue already contains several jobs, so the new job ends up at the back of the queue.

  • A few seconds later new data arrives, also with the name xyz.txt. Unfortunately, the first file has not yet been processed.


Here is what could happen next.


  • The FTP service receives the first file, saves it and sends a notification that the file xyz.txt (in the respective directory) needs to be processed.

  • A job is generated and placed in the queue.

  • The FTP service receives the second file a few seconds later and saves it under the same name, overwriting the first. Once again, a job is generated.

  • Finally, the first FTP job is processed: the file is read and deleted, and the job is run. Unfortunately, it picks up the file content of the second transfer, because the first file has been overwritten.

  • The second FTP job is processed, and the file can no longer be found.
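The sequence above can be reproduced in a few lines. This is a deliberately simplified sketch: the FTP directory is modelled as a dict and the job queue as a list, purely to make the data loss visible.

```python
# Hypothetical model: filename -> content stands in for the FTP directory,
# the list stands in for the job queue.
directory = {}
queue = []

# First upload: the external system writes xyz.txt, a job is queued.
directory["xyz.txt"] = "content of first transfer"
queue.append("xyz.txt")

# Second upload a few seconds later, same name: the first file is
# overwritten before its job has run, and a second job is queued.
directory["xyz.txt"] = "content of second transfer"
queue.append("xyz.txt")

# The first job finally runs: it reads and deletes the file, but the
# content it sees already belongs to the second transfer.
first = directory.pop(queue.pop(0))
print(first)

# The second job runs: the file is gone.
second = directory.pop(queue.pop(0), None)
print(second)  # nothing left to process - the first transfer is lost
```

Running this shows the first job reading the second transfer's content and the second job finding no file at all, exactly the failure mode described above.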


In extreme cases, both jobs might manage to read the file and process the same data. Either way, the data from the first transfer is lost. Ideally, you should ensure that data uploads do not always use the same filename, or at least that different data is not sent under the same name within a few seconds, or that the uploader checks that the previous file is gone before uploading a new one, instead of simply overwriting it. But if none of this is possible, you can prioritise your profile to at least reduce the likelihood of such an event.
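If you do control the uploading side, the first remedy mentioned above (never reusing the same filename) is cheap to implement. The naming scheme below, a timestamp plus a random suffix, is only an assumption for illustration; any scheme that guarantees uniqueness per upload works.

```python
import time
import uuid

def unique_name(base="xyz", ext="txt"):
    """Build a filename that is unique per upload, so a second transfer
    can never overwrite an unprocessed first one. Hypothetical scheme:
    base name + timestamp + short random suffix."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    return f"{base}-{stamp}-{uuid.uuid4().hex[:8]}.{ext}"

a = unique_name()
b = unique_name()
print(a)
print(b)  # distinct from the first name, even within the same second
```

With unique names, each queued job refers to its own file, so the overwrite-and-lose scenario described above cannot occur regardless of queue length.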