Caplin Platform: upgrading from 6.2 to 7.x

Hardware requirements and software dependencies

For Caplin Platform 7’s hardware requirements and software dependencies, see Caplin Platform System Requirements.

Caplin Platform 7 is backward compatible with StreamLink 6.2 clients.

Upgrade clients to StreamLink 7 to take advantage of the following new features in Caplin Platform 7:

  • Re-ordering of watchlists

  • Repeatable fields in contribs

  • Cancelling triggers in Transformer’s Alerts Service module

Compatibility with DataSource 6.2 peers

Caplin Platform 7 components are compatible with Caplin Platform 6.2 DataSource peers, subject to the following caveats:

  • Liberator 7 servers cannot be clustered with Liberator 6.2 servers.

  • Transformer 7 servers cannot be clustered with Transformer 6.2 servers.

  • Do not bind the DataSource server socket of Liberator 7 or Transformer 7 to an IPv6 address.

    On dual-stack networks, you can inadvertently bind a DataSource 7 component to an IPv6 address if you assign a hostname to datasrc-interface. To avoid connectivity issues, see DataSource: IPv6 Networking.

Compatibility with the Deployment Framework 6.2

All servers in your Caplin stack must use the same version of the Deployment Framework. When upgrading to Caplin Platform 7, all your servers must be upgraded to the Deployment Framework 7.

The Deployment Framework 7 is compatible with:

  • Liberator 7

  • Transformer 7

  • Adapters written using DataSource 7.0 libraries

  • Adapters written using DataSource 6.2 libraries (to avoid connectivity issues with Caplin Platform 7 components, see DataSource: IPv6 Networking)

Liberator 7, Transformer 7, and adapters written using DataSource 7 libraries cannot be deployed on the Deployment Framework 6.2.

Deployment Framework compatibility matrix
Component                     Deployment Framework 6.2    Deployment Framework 7

Liberator 6.2                 ✓                           ✗
Transformer 6.2               ✓                           ✗
Adapters (DataSource 6.2)     ✓                           ✓
Liberator 7                   ✗                           ✓
Transformer 7                 ✗                           ✓
Adapters (DataSource 7)       ✗                           ✓

Compatibility with Caplin FX Suite

The minimum version of Caplin FX Sales that is compatible with Transformer 7 is Caplin FX Sales 1.11.

Upgrading to the Deployment Framework 7

It is not possible to perform an in-place upgrade of the Deployment Framework from version 6.2 to version 7. Instead, install the Deployment Framework 7 in parallel, and then migrate data and configuration overrides from your Caplin Platform 6.2 installation.

To migrate from version 6.2 to version 7.0 of the Deployment Framework, follow the instructions below:

  1. Install the Deployment Framework 7 in parallel to your existing installation.

  2. Deploy core components to the Deployment Framework 7.

  3. Manually merge configuration overrides from your Deployment Framework 6.2 installation to your Deployment Framework 7.0 installation.

    Do not overwrite Caplin Platform 7 override files with Caplin Platform 6.2 override files. Port the 6.2 configuration directives to the 7.0 configuration files manually.
  4. Perform a test migration of data persisted by Transformer 6.2.

  5. Test your Deployment Framework 7.0 installation.

  6. To cut over to the Deployment Framework 7.0:

    1. Shut down both your 6.2 and 7.0 installations.

    2. Migrate data persisted by Transformer 6.2 (tested in step 4 above).

    3. Start the Deployment Framework 7.0.

Summary of changes in Liberator 7

Summary of important changes in Liberator 7:

  • Liberator 7 supports IPv6.

    On dual-stack networks, hostnames in Caplin Platform configuration files that previously resolved to an IPv4 address may now resolve to an IPv6 address. To avoid connectivity issues in dual-stack (IPv4 and IPv6) networks, see DataSource: IPv6 Networking.

  • Liberator 7’s Auth API is not compatible with auth modules written for Liberator 6.2.

    If your deployment uses a custom Liberator auth module, you must migrate the module to the Liberator 7 Auth API.

    If your deployment uses Caplin’s Permissioning Auth Module (PAM), you must deploy the Caplin Permissioning Service 7.

  • Liberator 7.1 introduces a microsecond timestamp format for packet logs and event logs.

    • Liberator 7.1 packet logs can only be read with logcat 7.1 or greater.

    • If you use a log analysis tool for Liberator 7.1 log files, you may need to reconfigure the tool to expect microsecond timestamps in Liberator event logs.

    • Liberator 7.1 expects the initial timestamp in a latency chain to be a UNIX timestamp (seconds since 1 Jan 1970) with a nanosecond fractional component.

Upgrading to Liberator 7

To upgrade to Liberator 7, follow the steps below:

  1. Install the Deployment Framework 7. See Installing the Deployment Framework.

  2. Deploy Liberator 7 to the new Deployment Framework. See Installing Liberator.

Summary of changes in Transformer 7

Summary of important changes in Transformer 7:

  • Transformer 7 supports IPv6. Hostnames in configuration files that previously resolved to IPv4 addresses may now resolve to IPv6 addresses. For information on how to avoid connectivity issues, see DataSource: IPv6 Networking.

  • Transformer 7 includes a new, redesigned Java Transformer Module (JTM) API (com.caplin.jtm). The original (now legacy) JTM API (com.caplin.transformer.module) is now deprecated and has an end-of-life scheduled for one year from the release of Transformer 7.0. For details, see Java Transformer Module (JTM) API below.

  • Transformer 7’s Persistence Service has been redesigned. You will need to create new database tables and migrate previously persisted data to the new tables. For details, see Persistence Service below.

  • Transformer 7.1 introduces a microsecond timestamp format for packet logs and event logs.

    • Transformer 7.1 packet logs can only be read with logcat 7.1 or greater.

    • If you use a log analysis tool for Transformer 7.1 log files, you may need to reconfigure the tool to recognise microsecond timestamps in Transformer event logs.

    • Transformer 7.1 expects the initial timestamp in a latency chain to be a UNIX timestamp (seconds since 1 Jan 1970) with a nanosecond fractional component.

Upgrading to Transformer 7

To upgrade to Transformer 7, follow the steps below:

  1. Install the Deployment Framework 7. See Installing the Deployment Framework.

  2. Deploy Transformer 7 to the new Deployment Framework. See Installing Transformer.

  3. Copy Transformer 6.2’s memory file to Transformer 7. See Transformer memory file, below.

  4. If you have written your own Transformer modules, migrate each module to Transformer 7 using one of the approaches below:

    • Rewrite the module using the new JTM API; create one or more database tables to store the module’s persistence data; and migrate data previously persisted by the module to the new table(s). See Rewriting a Transformer module to use the new JTM.

    • Deploy the module unchanged; create the TF_LEGACY_PERSISTENCE database table required by Transformer 7 to support the legacy JTM API; and migrate data previously persisted by the module to the TF_LEGACY_PERSISTENCE table.

      The legacy JTM API is deprecated. You should rewrite your module using the new JTM API as soon as possible.
  5. If you use Caplin’s Charting Service module, follow the instructions in Transformer module: Charting Service.

  6. If you use Caplin’s Persistence Service Client module, follow the instructions in Persistence Service Client.

  7. If you use Caplin’s Watchlist Service module, follow the instructions in Transformer module: Watchlist Service.

  8. If you use Caplin’s Alerts Service module, follow the instructions in Transformer module: Alerts Service.

  9. If you use Caplin’s HighLow Service module, follow the instructions in Transformer module: High-Low Service.

  10. If you use Caplin’s TS1 Decoder Service module, follow the instructions in Transformer module: TS1 Decoder Service.

Persistence Service

The Persistence Service in Transformer 7 has been rewritten to improve query performance. The new service can be accessed via both the legacy JTM and the new JTM.

Summary of important changes:

  • C modules that use the Transformer 6.2 Persistence API are not compatible with Transformer 7. Rewrite affected modules to use the new Transformer Persistence API.

  • Lua pipeline scripts that use the Transformer 6.2 Persistence API are not compatible with Transformer 7. Rewrite affected Lua scripts to use the new Persistence Module.

  • Java modules that use the legacy JTM API’s persistence API will continue to be supported for as long as the legacy JTM API is supported (see Java Transformer Module (JTM) API, below), with the following caveats:

    • To implement the legacy JTM API, Transformer 7 requires a new database table: TF_LEGACY_PERSISTENCE. You must create this table manually.

    • The legacy JTM PersistenceChangeListener interface is supported by Transformer 7, but redundant. Any implementation of this interface will not receive events because Transformer 7 does not support synchronisation of file-based persistence stores between Transformer cluster nodes.

  • To persist data to MySQL or MariaDB in Transformer 7, use the MariaDB driver, not the MySQL driver. The MySQL driver does not support the full functionality of the JDBC ParameterMetaData interface, and generates an error when used with Transformer 7.

  • The following Caplin Transformer services have been rewritten in version 7 to use the new JTM API: Persistence Service, Persistence Service Client, Alerts Service, Watchlist Service, and HighLow Service. During installation of these modules, you are required to create new database elements that store and manage their persistence data.

    Persistence tables and triggers required by Transformer 7 services

    Module                       Tables                                                Triggers
    Persistence Service          TF_LEGACY_PERSISTENCE                                 (none)
    Persistence Service Client   TF_RECORD_PERSISTENCE                                 (none)
    Alerts Service               TF_TRIGGER_PERSISTENCE, TF_NOTIFICATION_PERSISTENCE   (none)
    Watchlist Service            TF_WATCHLIST_PERSISTENCE                              TF_WATCHLIST_PERSISTENCE_ADD_POSITION
    HighLow Service              HIGH_LOW_TABLE                                        (none)

    For reference implementations of the above tables and triggers in SQLite DDL, see SQLite DDL scripts.

    If you want to retain data previously persisted in Transformer 6.2, you will need to migrate data from Transformer 6.2’s single persistence table to the multiple tables listed above.

  • File-based persistence:

    • File-based persistence is not supported in production and is provided for ease of setting up a local development environment and as a reference implementation for database administrators.

    • File-based persistence has been redesigned in Transformer 7 to use SQLite.

    • Synchronisation of local persistence files between nodes in a Transformer Cluster is not implemented in Transformer 7.

  • The new JTM API supports more sophisticated database schemas (an illustrative example follows this list):

    • Multiple tables are supported, and developers must specify a table name when persisting and reading data. See the methods in the new JTM API’s Persistence interface.

      To improve query performance, developers are encouraged to use separate tables for each module they create.

      A database administrator must create the persistence tables required by a Transformer module.

    • Composite (multi-column) primary keys are supported.

    • Multiple columns can be persisted per key.

  • The table-name and columns options for the configuration item add-database-params are not used by Transformer 7. If you include these options in Transformer 7’s configuration, Transformer will raise an error.
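
As an illustration of the schema capabilities listed above, the hypothetical SQLite table below uses a composite primary key and persists several columns per key. It is a sketch only: the table name TF_EXAMPLE_FX_ORDERS and its columns are invented for this example, and the tables your own module requires will depend on the data it persists.

    -- Illustrative sketch only: this table and its columns are hypothetical.
    CREATE TABLE IF NOT EXISTS TF_EXAMPLE_FX_ORDERS (
        -- Composite (multi-column) primary key: one row per user and order
        ORDER_USER    VARCHAR(250) NOT NULL,
        ORDER_ID      VARCHAR(100) NOT NULL,
        -- Multiple columns persisted per key
        CURRENCY_PAIR VARCHAR(10)  NOT NULL,
        AMOUNT        NUMERIC      NOT NULL,
        ORDER_STATE   TEXT         NOT NULL,
        PRIMARY KEY (ORDER_USER, ORDER_ID)
    );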

Java Transformer Module (JTM) API

Transformer 7 includes a new, redesigned JTM API: com.caplin.jtm. It is closely modelled on the design of the DataSource for Java API and is incompatible with the previous (now legacy) JTM API.

The legacy JTM API, com.caplin.transformer.module, is still included with Transformer 7 but is now deprecated. Support for the legacy JTM API will end one year from the first release of Transformer 7, after which time the legacy JTM API may no longer be included in future releases of Transformer.

While the legacy JTM API continues to be included in Transformer 7, modules written for the legacy JTM API will continue to work subject to the following caveats regarding Transformer’s Persistence Service:

  • The persistence API in the legacy JTM API requires the TF_LEGACY_PERSISTENCE database table. You must create this table in your database server manually. For guidance, see the reference Persistence Service SQLite DDL; an illustrative sketch also follows this list.

  • If the legacy module uses a PersistenceChangeListener, the listener will not receive any events because Transformer 7 does not support synchronisation of file-based persistence stores in Transformer clusters.
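
For orientation, a minimal sketch of such a table is shown below. The column names and types are assumptions chosen to illustrate a simple key-to-value store; the authoritative schema is the reference Persistence Service SQLite DDL referred to above.

    -- Illustrative sketch only: the column names and types are assumptions,
    -- not the reference schema.
    CREATE TABLE IF NOT EXISTS TF_LEGACY_PERSISTENCE (
        PERSISTENCE_KEY  VARCHAR(250) NOT NULL,
        PERSISTENCE_DATA TEXT         NOT NULL,
        PRIMARY KEY (PERSISTENCE_KEY)
    );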

For guidance on rewriting your custom Transformer modules to use the new JTM, see the section below.

Rewriting a Transformer module to use the new JTM

To rewrite a custom Java module to use the new JTM, follow the steps below:

  1. If your module uses the JTM’s persistence API, design and create one or more database tables to store the module’s persisted data.

    The table design will vary depending on the needs of your module. For examples, see the tables documented in the SQLite reference implementation. For convenience, the table for the Persistence Service Client module is reproduced below:

    Example: the reference SQLite table for the Persistence Service Client module
    CREATE TABLE IF NOT EXISTS TF_RECORD_PERSISTENCE (
        PERSISTENCE_USER VARCHAR(250) NOT NULL,
        PERSISTENCE_ID VARCHAR(100) NOT NULL,
        PERSISTENCE_DATA TEXT NOT NULL,
        PRIMARY KEY (PERSISTENCE_USER, PERSISTENCE_ID)
    );
  2. Rewrite the Java module to use the new JTM API. For guidance, see the JavaDoc for the new Java Transformer Module SDK and the Java Transformer Module Project Template.

  3. Write a script to read data persisted by your module under Transformer 6.2 and write it to the module’s new database tables.

Transformer memory file

Transformer’s object cache is persisted to disk on shutdown. You can optionally warm-start Transformer 7 with the object cache from your Transformer 6.2 installation. To do this, copy the memory file from Transformer 6.2 to Transformer 7. You will find the memory file at the location defined by Transformer’s memory-file configuration item.

Persistence Service Client

The Persistence Service Client, included with Transformer, provides persistence services to StreamLink clients.

To migrate Persistence Service Client data persisted under Transformer 6.2 to Transformer 7:

  1. Activate the Persistence Service Client blade in Transformer 7.

  2. In your database server, create the database table required by the Persistence Service Client module: TF_RECORD_PERSISTENCE. For guidance, see the reference Persistence Service Client SQLite DDL.

  3. Write a script to read the data persisted by the Persistence Service Client in Transformer 6.2 and write it to the new TF_RECORD_PERSISTENCE table, as sketched after this procedure. The serialisation format for persisted data in the Persistence Service Client 7 is the same as in the Persistence Service Client 6.2; no conversion is required.

    Caplin provide a migration tool that you can use as a starting point for your migration. See Persisted-data migration tool.
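
If the Transformer 6.2 persistence table and the new TF_RECORD_PERSISTENCE table are hosted in the same database, the copy can be expressed as a single SQL statement. The sketch below assumes a 6.2 table named TF_PERSISTENCE with USERNAME, KEY_NAME and DATA columns; those names are placeholders for whatever your 6.2 schema actually uses.

    -- Sketch only: TF_PERSISTENCE, USERNAME, KEY_NAME and DATA are placeholder
    -- names for the Transformer 6.2 persistence table and its columns.
    -- Add a WHERE clause if the 6.2 table also holds data persisted by other modules.
    INSERT INTO TF_RECORD_PERSISTENCE (PERSISTENCE_USER, PERSISTENCE_ID, PERSISTENCE_DATA)
    SELECT USERNAME, KEY_NAME, DATA
    FROM TF_PERSISTENCE;
    -- No conversion of the DATA values is required: the serialisation format is unchanged.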

Transformer module: Watchlist Service

The Watchlist Service 6.2 is not compatible with Transformer 7. Deploy the Watchlist Service 7 instead.

To migrate Watchlist Service data persisted under Transformer 6.2 to Transformer 7:

  1. Deploy the Watchlist Service 7 blade to Transformer 7.

  2. In your database server, create the database table and database trigger required by the Watchlist Service 7. For guidance, see the reference Watchlist Service SQLite DDL.

  3. Write a script to read the data persisted by the Watchlist Service 6.2 and write the data to the new database table for the Watchlist Service 7.0, as sketched after this procedure.

    Although the Watchlist Service 7 uses a new database table design, the serialisation format for watchlist data has not changed in the Watchlist Service 7.0.

    The values in the table’s WATCHLIST_POSITION column must form a contiguous sequence of integers (1–n). If this requirement is not met, the Watchlist Service 7 will not operate correctly.
    Caplin provide a migration tool that you can use as a starting point for your migration. See Persisted-data migration tool, below.
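
One way to satisfy the contiguity requirement is to regenerate the position values as you copy the rows. The SQLite sketch below assumes hypothetical source and destination column names (only the WATCHLIST_POSITION column is named above) and uses ROW_NUMBER() to assign positions 1 to n within each watchlist.

    -- Sketch only: TF_WATCHLIST_PERSISTENCE_62 and every column name other than
    -- WATCHLIST_POSITION are placeholders for your actual 6.2 and 7.0 schemas.
    -- If the TF_WATCHLIST_PERSISTENCE_ADD_POSITION trigger assigns positions itself,
    -- check how it interacts with explicit position values before running a bulk copy.
    INSERT INTO TF_WATCHLIST_PERSISTENCE
        (WATCHLIST_USER, WATCHLIST_NAME, WATCHLIST_ITEM, WATCHLIST_POSITION)
    SELECT
        WATCHLIST_USER,
        WATCHLIST_NAME,
        WATCHLIST_ITEM,
        -- Regenerate positions as a contiguous 1..n sequence per watchlist
        ROW_NUMBER() OVER (
            PARTITION BY WATCHLIST_USER, WATCHLIST_NAME
            ORDER BY WATCHLIST_POSITION
        )
    FROM TF_WATCHLIST_PERSISTENCE_62;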

Transformer module: Alerts Service

The Alerts Service 6.2 is not compatible with Transformer 7. Deploy the Alerts Service 7 instead.

To migrate Alerts Service data persisted under Transformer 6.2 to Transformer 7:

  1. Deploy the Alerts Service 7 blade to Transformer 7.

  2. In your database server, create the two database tables required by the Alerts Service 7. For guidance, see the reference Alerts Service SQLite DDL.

  3. Write a script to read the data persisted under the Alerts Service 6.2 and write the data to the new database tables for the Alerts Service 7.0.

    The serialisation format for triggers and notifications in the Alerts Service 7 is different from the serialisation format in the Alerts Service 6.2. You will need to convert serialised triggers and notifications to the new serialisation format before writing them to the new database tables for the Alerts Service 7.0.

    Caplin provide a migration tool that you can use as a starting point for your migration. See Persisted-data migration tool, below.

Transformer module: Refiner

Refiner 6.2 uses the now-deprecated JTM API, com.caplin.transformer.module. Deploy the Refiner 7 module instead.

The Refiner module does not persist data. No data migration is required.

The Java configuration item add-javaclass has been relocated to Refiner 7’s configuration overrides. If you have added new classes to Refiner 6.2’s classpath, this configuration now belongs in Refiner 7’s java.conf override file.

Transformer module: High-Low Service

The HighLow Service 6.2 is not compatible with Transformer 7. Deploy the HighLow Service 7.0 instead.

To migrate High-Low Service data persisted under Transformer 6.2 to Transformer 7:

  1. Deploy the High-Low Service 7.0 blade to Transformer 7.

  2. In your database server, create the database table required by the HighLow Service 7.0. For guidance, see the reference High-Low Service SQLite DDL.

  3. Write a script to read data persisted under the High-Low Service 6.2 and write the data to the High-Low Service 7.0’s database table.

    Caplin provide a migration tool that you can use as a starting point for your migration. See Persisted-data migration tool, below.

Transformer module: Charting Service

To migrate Charting Service data persisted under Transformer 6.2 to Transformer 7:

  1. Deploy the Charting Service 6.2 to Transformer 7.

  2. Copy the contents of the charts directory under the Deployment Framework 6.2 to the Deployment Framework 7.

    The name of the directory to which charting data is persisted is defined by the configuration item cache-directory.

Transformer module: TS1 Decoder Service

This section applies to you if you are upgrading an existing installation of the TS1 Decoder Service 6. If you are deploying the TS1 Decoder Service 7 to a new installation of the Deployment Framework, this section does not apply to you.

The directory name of Caplin’s TS1 Decoder Service changed in version 7 from TS1DecoderService to CPB_TS1DecoderService. As a result, you must manually erase the existing TS1 Decoder Service before installing the TS1 Decoder Service 7. The procedure below includes steps to preserve your existing TS1 Decoder Service’s configuration.

Follow the steps below to upgrade an existing installation of TS1 Decoder Service:

  1. Back up the TS1 Decoder’s configuration overrides directory: <framework_root>/global_config/overrides/TS1DecoderService

  2. In the root directory of your Deployment Framework, run the command below to erase the existing TS1 Decoder Service:

    ./dfw erase TS1DecoderService
  3. Copy the new TS1 Decoder Service kit to your Deployment Framework’s kits directory.

  4. In the root directory of your Deployment Framework, run the command below to deploy the new TS1 Decoder Service:

    ./dfw deploy
  5. Restore your backup of the TS1 Decoder’s configuration overrides directory.

Persisted-data migration tool

Caplin provide a persisted-data migration tool to migrate data persisted by Caplin-supplied modules from the single database table of the Persistence Service 6.2 to the multiple-table schema of the Persistence Service 7.

The data migration tool is not a supported product. The source code for the tool is supplied 'as is', and you are free to modify it and use it as you see fit. Caplin are not responsible for any loss or damage arising directly or indirectly from the use of this tool. As with any data-migration operation, back up your data first.

To obtain a copy of the source code, see Caplin’s persistence-upgrade repository on GitHub.