You can configure Liberator to batch messages together before sending them to a client. This feature is called bursting, because it's designed to smooth peaks in data rates.
This diagram shows how bursting works:
The burst period is defined through the burst-max configuration setting; 0.5 seconds in the diagram. When Liberator receives a message M1 from another DataSource application (at time zero in the diagram), it buffers the message and starts the burst period timer. Subsequent messages M2, M3 and M4 are received during the burst timer period, so Liberator buffers them too. At 0.5 seconds the burst timer expires. Client 1 is interested in all the buffered messages, so Liberator batches them into a single message and sends it to the client.
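For reference, the burst period in this example could be set with a line like the following in the Liberator configuration file. This is a sketch only; check the configuration reference for your Liberator version for the exact file and syntax:

```
# Buffer updates for up to 0.5 seconds before sending them to clients
burst-max 0.5
```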
After another 0.2 seconds the DataSource application sends message M5, so Liberator starts the burst timer again and buffers the message, along with message M6 which arrives 0.1 seconds later. No more messages are received until the burst timer expires again, 0.5 seconds after the receipt of M5. Liberator batches up M5 and M6 and sends them to Client 1.
Messages that arrive at longer intervals than the burst interval are not buffered, as this would delay them unacceptably. Message M7 illustrates this. M7 arrives 0.7 seconds after the burst timer last expired (that is, after a longer interval than the configured burst period), so it's sent on to the client immediately. The burst timer is started at the same time, so that if a subsequent message M8 (not shown) is received within the burst period, it will be buffered and then forwarded 0.5 seconds after M7.
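The timeline above can be sketched as a small simulation. This is not Liberator's actual implementation; the `simulate` function, the message names, and the arrival times are illustrative, chosen to reproduce the M1 to M7 scenario described in the text:

```python
BURST_MAX = 0.5  # the burst period in seconds (the burst-max setting)

def simulate(arrivals):
    """arrivals: list of (name, time) pairs in time order.
    Returns a list of (send_time, [names]) batches sent to the client."""
    sends = []
    buffer = []
    timer_end = None    # when the currently running burst timer will expire
    last_expiry = 0.0   # when the burst timer last expired

    def expire(at):
        # The burst timer has expired: flush any buffered messages as one batch.
        nonlocal timer_end, last_expiry, buffer
        if buffer:
            sends.append((at, buffer))
        buffer = []
        last_expiry = at
        timer_end = None

    for name, t in arrivals:
        # Flush a timer that expired before this message arrived.
        if timer_end is not None and t >= timer_end:
            expire(timer_end)
        if timer_end is not None:
            # Timer is running: buffer the message (the M2-M4, M6 case).
            buffer.append(name)
        elif t - last_expiry > BURST_MAX:
            # Long gap since the timer last expired: forward immediately,
            # but restart the timer anyway (the M7 case).
            sends.append((t, [name]))
            timer_end = t + BURST_MAX
        else:
            # Start a new burst period and buffer the message (M1, M5).
            buffer.append(name)
            timer_end = t + BURST_MAX

    if timer_end is not None:
        expire(timer_end)
    return sends

arrivals = [("M1", 0.0), ("M2", 0.1), ("M3", 0.2), ("M4", 0.35),
            ("M5", 0.7), ("M6", 0.8), ("M7", 1.9)]
print(simulate(arrivals))
# [(0.5, ['M1', 'M2', 'M3', 'M4']), (1.2, ['M5', 'M6']), (1.9, ['M7'])]
```

The output matches the walkthrough: M1 to M4 go out together when the timer expires at 0.5 seconds, M5 and M6 at 1.2 seconds, and M7 immediately on arrival because it came more than 0.5 seconds after the previous expiry.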
Bursting can increase the scalability of Liberator: it makes better use of the network between Liberator and its clients (fewer individual messages are sent), and it reduces CPU usage on the Liberator server hardware. There is a trade-off though: pausing to batch updates together adds latency to the messages, but if you set the burst interval appropriately this doesn't affect the end-user experience.