Kafka (Input Agent)
Introduction: You can find a description of this phase in section Phase 1 (Introduction).
Note: See also section File Names, File Patterns, Paths, System Constants and Variables.
(1) Only relevant for the add-on module Load Balancing, to start the profile on a specific node. If the checkbox Can be triggered on any server is not set, the checkbox Profile may only run in one instance is also not set, and no value is set in field Start on IS only, a triggered cronjob in a load balancing system is forced to remain on the Working Node on which it was triggered. See also section Settings for Profiles (Load Balancing).
(2) The Kafka alias. See section Kafka Connections.
(3) The topic from which messages are to be received.
(4) An optional identifier of a Kafka consumer (in a consumer group) that is passed to a Kafka broker with every request.
(5) If set, an email is sent when a tombstone is received; otherwise, the tombstone is skipped and ignored. A tombstone always has a null payload, so it must not be processed; otherwise, the mapping will generate an error during parsing.
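For illustration only, here is a minimal sketch of how a tombstone can be detected with the plain Apache Kafka Java client (the class and method names are hypothetical; Lobster_data performs this check internally):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;

public final class TombstoneCheck {
    // A tombstone is a record whose value is null; it marks its key for
    // deletion in a compacted topic and must not be fed into a mapping.
    static boolean isTombstone(ConsumerRecord<String, String> record) {
        return record.value() == null;
    }
}
```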
(6) The data type of the key of the message. Important note: The data type must always be specified. Make sure that you always use matching types when sending and receiving. If, for example, a message is defined and sent as Byte/String and then read as Integer/Byte, this leads to an error and the message cannot be read. Lobster_data as a consumer is then blocked and does not process any further messages from this topic until the erroneous message is removed from the broker!
(7) The data type of the message. Important note: The data type must always be specified. Make sure that you always use matching types when sending and receiving. If, for example, a message is defined and sent as Byte/String and then read as Integer/Byte, this leads to an error and the message cannot be read. Lobster_data as a consumer is then blocked and does not process any further messages from this topic until the erroneous message is removed from the broker! Important note: If the data type AVRO is used, the address of the schema registry must be specified in (11). Name: schema.registry.url, Value: http://address:port
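As an illustration of the type-matching requirement, a minimal sketch of the equivalent properties in the plain Apache Kafka Java client (the broker address and the String deserializer choice are assumptions, not Lobster_data defaults):

```java
import java.util.Properties;

public final class MatchingTypes {
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092"); // assumed address
        // Key and value deserializers must match the types the producer
        // used; reading a String value as Integer fails on every poll.
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // For AVRO, the Confluent Avro deserializer additionally needs
        // the schema registry address, as described in (11):
        // props.put("value.deserializer",
        //         "io.confluent.kafka.serializers.KafkaAvroDeserializer");
        // props.put("schema.registry.url", "http://address:port");
        return props;
    }
}
```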
(8) UTC timestamp. This can be used to trigger a new pickup of all messages that are newer than the specified time. The parameter is automatically reset after retrieval.
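In the plain Kafka Java client, this kind of timestamp-based pickup corresponds to offsetsForTimes followed by seek; a hedged sketch (the initial poll to obtain an assignment and the method name are illustrative):

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

public final class SeekToTimestamp {
    // Rewind every assigned partition to the first offset whose record
    // timestamp is at or after utcMillis.
    static void seek(KafkaConsumer<String, String> consumer, long utcMillis) {
        consumer.poll(Duration.ZERO); // ensure partitions are assigned
        Map<TopicPartition, Long> query = new HashMap<>();
        for (TopicPartition tp : consumer.assignment()) {
            query.put(tp, utcMillis);
        }
        Map<TopicPartition, OffsetAndTimestamp> offsets =
                consumer.offsetsForTimes(query);
        offsets.forEach((tp, ot) -> {
            if (ot != null) { // null if no record is newer than utcMillis
                consumer.seek(tp, ot.offset());
            }
        });
    }
}
```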
(9) If set, each record is committed individually (asynchronously). If not set, you can specify after how many records a commit takes place.
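A minimal sketch of committing after every n records with the plain Kafka Java client (the loop structure and poll timeout are illustrative assumptions):

```java
import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public final class CommitEveryN {
    // Commit asynchronously once every n records instead of per record.
    static void pollLoop(KafkaConsumer<String, String> consumer, int n) {
        int uncommitted = 0;
        while (true) {
            ConsumerRecords<String, String> records =
                    consumer.poll(Duration.ofMillis(500));
            for (ConsumerRecord<String, String> record : records) {
                // ... process record ...
                if (++uncommitted >= n) {
                    consumer.commitAsync(); // commits the current position
                    uncommitted = 0;
                }
            }
        }
    }
}
```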
(10) With this option, the consumer can be statically assigned to selected partitions. Rebalancing is deliberately not performed in this case.
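In Kafka terms this corresponds to assign() instead of subscribe(); a short sketch (the chosen partitions 0 and 2 are arbitrary examples):

```java
import java.util.Arrays;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public final class StaticAssign {
    // assign() pins the consumer to fixed partitions; unlike subscribe(),
    // no consumer group rebalancing takes place.
    static void attach(KafkaConsumer<String, String> consumer, String topic) {
        consumer.assign(Arrays.asList(
                new TopicPartition(topic, 0),
                new TopicPartition(topic, 2)));
    }
}
```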
(11) Additional consumer properties can be defined via the context menu. Note: The property group.id, for example, defines the consumer group and is mandatory for 'subscribe'. If it is not explicitly specified, Lobster_data creates this property per consumer as grp + <hash code of topic>.
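Purely to illustrate the documented default, a sketch of deriving such a group id (the concrete hash function used internally by Lobster_data is an assumption here):

```java
import java.util.Properties;

public final class DefaultGroupId {
    // Mirrors the documented pattern grp + <hash code of topic>; the
    // exact hash used inside Lobster_data is assumed, not confirmed.
    static void applyDefaultGroup(Properties props, String topic) {
        props.putIfAbsent("group.id", "grp" + topic.hashCode());
    }
}
```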
(12) and (13) Messages are collected in a buffer. The profile is started with the collected messages when either the maximum number of collected messages is reached or the maximum waiting time has elapsed. If the profile is saved, messages in the buffer are deleted. Note: The desired number of seconds can also be entered manually.
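A minimal sketch of this collect-until-count-or-timeout behavior with the plain Kafka Java client (the poll interval and the choice to start the wait timer with the first buffered message are assumptions):

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public final class MessageBuffer {
    // Collect record values until maxCount is reached or maxWaitMillis
    // has elapsed since the first record entered the buffer.
    static List<String> collect(KafkaConsumer<String, String> consumer,
                                int maxCount, long maxWaitMillis) {
        List<String> buffer = new ArrayList<>();
        long deadline = Long.MAX_VALUE;
        while (buffer.size() < maxCount && System.currentTimeMillis() < deadline) {
            for (ConsumerRecord<String, String> r :
                    consumer.poll(Duration.ofMillis(200))) {
                if (buffer.isEmpty()) {
                    deadline = System.currentTimeMillis() + maxWaitMillis;
                }
                buffer.add(r.value());
                if (buffer.size() >= maxCount) {
                    break;
                }
            }
        }
        return buffer;
    }
}
```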
(14) When messages are collected, you can decide here what should be returned. If All is selected, the individual messages may have to be joined with a delimiter character.
(15) If conditions are set and met (for functions, the function chain must return true), the collected messages are forwarded early, i.e. before the limits in (12) and (13) are reached.