Low-Latency Web Streaming

At the core of the Caplin Platform is a powerful web streaming server called Liberator. Web streaming allows data to be sent from the server to a web browser asynchronously and with low latency. The early web had no proper support for this capability; adding it makes full-duplex client-server communication possible. Web streaming has gone by many names, from reverse AJAX to Comet, and a number of techniques are now available for streaming data to a wide range of web browsers. Between them, StreamLink and Liberator implement the most appropriate of these techniques, including the latest HTML5 WebSocket.

StreamLink hides the different implementations from developers; this allows you to concentrate on building the business logic of your web trading app, rather than worrying about implementing the communications technology.

Keeping Latency Low

The Caplin Platform pushes data between server components, and to and from client applications. The latency of this data delivery is critical in trading platforms, so keeping it to a minimum is a high priority for us.

The Platform uses web streaming to enable data to be sent from the server to a web browser asynchronously and with ultra-low latency. The system scales horizontally with load, and is highly performant: it can handle hundreds of thousands of messages per second. To make sure that message latency stays as low as possible, the Caplin Platform is regularly benchmarked. If your client application subscribes to a lot of data, or the data rate is just very high, you can tweak the Platform configuration to conflate the updates or batch the messages for optimal performance.

The Caplin Platform acts as a real-time cache of the data provided by the Integration Adapters. Liberator holds an in-memory cache of the latest values of any data currently subscribed to. When a new client subscribes to the same data, the latest values of that data can be immediately returned from the cache to that client without needing any further interaction with the sources of the data. This can improve performance, and in some cases it can compensate for the system that supplies the data not having a current value cache of its own. Transformer can also cache any data that is routed through it. You can make use of this cache when using the Transformer API to implement custom behavior or services.
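The caching behaviour described above can be sketched as a simple last-value cache: each update merges the changed fields into a stored image of the item, and a new subscriber receives that image immediately. This is a minimal illustration of the idea only; the class and item names here are invented for the example and are not Caplin APIs.

```python
# Minimal sketch of a last-value cache, illustrating the idea behind
# Liberator's in-memory cache. Names are illustrative, not Caplin APIs.

class LastValueCache:
    """Holds the latest field values for each subscribed data item."""

    def __init__(self):
        self._items = {}  # item name -> dict of latest field values

    def update(self, item, fields):
        # Merge the changed fields into the stored image of the item.
        self._items.setdefault(item, {}).update(fields)

    def subscribe(self, item):
        # A new subscriber gets the current image straight from the cache,
        # with no round trip to the original data source.
        return dict(self._items.get(item, {}))

cache = LastValueCache()
cache.update("/FX/GBPUSD", {"bid": "1.2701", "ask": "1.2703"})
cache.update("/FX/GBPUSD", {"bid": "1.2702"})  # only the bid changed

snapshot = cache.subscribe("/FX/GBPUSD")
# snapshot holds the merged latest values for both fields
```

Note how the second update carries only the changed field, yet the subscriber still receives a complete image — which is exactly why a server-side cache can stand in for a data source that has no current-value cache of its own.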

Dealing with High Data Rates

The Caplin Platform can handle very high data rates (hundreds of thousands of messages per second). If a client application subscribes to a lot of data, or the data rate is just very high, it’s a good idea to control the amount of data being sent to a client.

With the Caplin Platform you have the tools to either throttle or batch the data. Both throttling and batching add some latency to the messages, so it is important to configure these features to suit the profile of your end-users and how they use the web trading platform.

Throttling Data

When data is sent to the client, it contains the latest values of any fields that have changed. If fields change frequently, even by small amounts, each change triggers an update, so the rate of updates sent to the client can become very high.

In this case, it is useful to be able to throttle the data flow. This simply means that not all of the updates are sent to the subscribed clients.

The Caplin Platform lets you specify a threshold for the update rate, above which the data can be throttled. Let’s look at an example of this in action:

  1. Let’s say a data item updates twice a second.

  2. You set the throttling threshold to one second.

  3. The Platform will send an update for that item to subscribed clients at most once every second.

  4. This means that even though there are two updates to the item within a second, only the second one will be sent to the clients, at the end of the one second interval.
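The steps above amount to conflation: within each throttle interval, a newer update simply replaces any unsent one, and only the latest value goes out when the interval ends. A minimal sketch of that logic, with invented names (this is not the Caplin configuration or API):

```python
# Hedged sketch of update conflation under a throttle interval.
# A periodic "tick" marks the end of each interval.

class Conflater:
    """Keeps only the latest unsent update for an item."""

    def __init__(self):
        self.latest = None

    def on_update(self, value):
        # A newer update replaces any update still waiting to be sent.
        self.latest = value

    def tick(self):
        """Called once per throttle interval; returns the update to send,
        or None if nothing changed during the interval."""
        value, self.latest = self.latest, None
        return value

c = Conflater()
c.on_update("1.2701")   # first update within the interval
c.on_update("1.2702")   # second update replaces the unsent first one
first_tick = c.tick()    # end of the interval: only the latest value is sent
second_tick = c.tick()   # next interval had no updates, so nothing is sent
```

This mirrors the numbered example: two updates arrive within the one-second interval, but only the second is delivered, at the end of the interval.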

Batching Data

The other way you can control data flow is by batching messages together into larger packets containing multiple messages, rather than by sending each message individually.

Batching makes better use of the network and the resources on the server.
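Batching can be sketched as a buffer that accumulates messages and writes them out as one packet once the batch is full (or when explicitly flushed). The names and batch size here are illustrative assumptions, not the Caplin wire format:

```python
# Hedged sketch of message batching: many messages travel in one packet,
# reducing per-message framing and write overhead at the cost of a
# little added latency.

class Batcher:
    def __init__(self, batch_size, send):
        self.batch_size = batch_size
        self.send = send      # callback that writes one packet
        self.buffer = []

    def on_message(self, msg):
        self.buffer.append(msg)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        # One network write carries the whole batch of messages.
        if self.buffer:
            self.send(list(self.buffer))
            self.buffer.clear()

packets = []
b = Batcher(batch_size=3, send=packets.append)
for msg in ["m1", "m2", "m3", "m4"]:
    b.on_message(msg)
b.flush()   # push out the partially filled final batch
# packets now holds two packets: one full batch of three, then the remainder
```

In practice a batcher would also flush on a timer, so that a partially filled batch never waits indefinitely; that timer is the knob that trades latency against efficiency.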