Logging in DataSource applications

All DataSource applications can log basic information about data that they handle and the destinations the data is sent to.

These logging capabilities are provided through the DataSource SDK that you build the application with. The DataSource API also allows you to define application-specific log files and write your own messages to them. For example, Caplin-supplied DataSource applications, such as Liberator and Transformer, use the API to log their own data in addition to the standard DataSource items.
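One way to declare such an application-specific log is through configuration, using the same kind of add-log block that appears later on this page. The sketch below is illustrative only: the log name is an example, and the messages themselves are written at run time through the DataSource API's logging calls, whose exact form depends on the SDK language you are using.

# Illustrative application-specific log; the name is an example only
add-log
   name      my_adapter_log
   log-level INFO
end-log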

Event log level

Each event that can be logged to a DataSource application’s Event log has a reporting level that indicates how important it is. (This is like the logging levels defined for Java in the java.util.logging.Level class.) The levels are, in order of increasing severity, DEBUG, INFO, WARN, NOTIFY, ERROR, CRIT.

You can control how much event detail is logged by setting the DataSource application's log-level configuration item. For example, setting log-level to WARN ensures that all events with reporting level WARN, NOTIFY, ERROR, or CRIT are logged, but INFO and DEBUG events are not.
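In the application's configuration file this is a single item. For instance, the WARN setting described above would be:

# Log events with reporting level WARN and above; INFO and DEBUG events are discarded
log-level WARN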

Managing the logs

You can define the names of the DataSource application's various log files through configuration, and you can also manage the size of the individual log files by configuring log file cycling. On a regular basis each log file is closed and renamed, and a new file is opened for writing; this process is called "cycling". You can apply global settings that specify the cycling of all log files, or set different cycling criteria for each type of log file, choosing from:

  • a maximum file size above which the log file is cycled.

  • a fixed time at which the log file is cycled.

  • a time interval after which the log file is cycled.

  • a combination of the above, where the log file is cycled when any one of the criteria is met (see the sketch after the example below).

A DataSource application can potentially produce very large amounts of log data, depending on how much traffic it is handling, so it is important to manage the size of the individual log files by setting up suitable cycling criteria.

Here’s an example of how an event log can be configured to cycle every two hours, with the cycled log files being retained for seven days before they’re overwritten with new ones:

add-log
   name      event_log
   log-level INFO
   period    120
   suffix    .%u%H%M
end-log

  • period specifies that the event log is to be cycled every 120 minutes (that is, every 2 hours).

  • suffix causes the filename of each cycled log to be appended with:

    • %u: a value from 1 to 7, representing Monday to Sunday.

    • %H%M: the time when the file was cycled, in hours and minutes.

  • log-level specifies that events with severity level INFO and above are to be logged (so DEBUG events are not logged).
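Cycling criteria can also be combined, so that the log is cycled as soon as any one criterion is met. The sketch below extends the example above with a size limit; the maxsize option name is an assumption made here for illustration, so check the exact option names in the configuration reference for your DataSource application:

add-log
   name      event_log
   log-level INFO
   period    120
   # Hypothetical size limit in bytes: cycle the log as soon as it exceeds this size,
   # even if the 120-minute period has not yet elapsed
   maxsize   10000000
   suffix    .%u%H%M
end-log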

Turning off logging

You can turn off logging of a particular type of data by configuring the name of the log file as /dev/null for Linux machines (or nul on Windows).

For example:

Packet logs record all messages sent between Liberator and its data sources. To stop packet logs from being created on your Linux machine, set the configuration item datasrc-pkt-log:

datasrc-pkt-log /dev/null
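On Windows, the equivalent setting points the packet log at the Windows null device:

datasrc-pkt-log nul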

Viewing log files

Most logs are simple text files that you can easily view with a suitable text display utility or text editor, such as the Linux commands cat, more, or vim. The DataSource packet logs are in binary format, but Caplin supplies a viewing utility called logcat that you use in the same way as the standard Linux cat command.
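For example, to page through a Liberator packet log (the file name here is illustrative; the actual name and location depend on your configuration):

logcat packet-liberator.log | more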


See also: