The basis: Packet latency measurement
The core idea is to carry out measurements under real-world traffic conditions for interactive applications and to observe the resulting latencies over a defined period. Data latencies are neither constant nor determined merely by physical distance.
Realistic traffic conditions require channels and networks that behave as they would for real-world applications. A mobile network has no fixed setup: it transmits data based on the capacity currently available and dynamically adjusts to demand as resources are added, released and shared with other clients in the same cell. Resource allocation therefore depends on the amount of data transported, which in turn influences transport latency.
Mobile network channels, setups and available resources change rapidly, and latency changes with them. Latency is not constant and can spike for short periods, which creates a challenge for real-time, interactive applications. Moreover, latency is not a single analogue value but a set of individual latencies, one for each transmitted packet, and the transport time of each packet also depends on packet size and transmission frequency. For meaningful results, packet size and frequency therefore have to match those of real-world applications.
Recommendation ITU-T G.1051 defines guidelines for creating realistic data streams and measuring packet latency. One well-defined option for creating packet streams is the two-way active measurement protocol (TWAMP) in line with IETF RFC 5357, which in turn is based on the user datagram protocol (UDP), the transport protocol used by most real-time network communications. Packet size and frequency (i.e. data rates) can be defined as for a real application, and the measurement approach determines the latency of each individual packet. The TWAMP method can change packet sizes during data stream transmission and is embedded in a wider framework to emulate data streams with varying data throughput.
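To illustrate the principle, the following minimal Python sketch sends UDP test packets that each carry a sequence number and a send timestamp and derives one round-trip latency reading per reflected packet. It is a simplified, TWAMP-like probe, not an implementation of RFC 5357; the reflector address, packet size and sending rate are hypothetical placeholders.

```python
# Minimal TWAMP-like round-trip latency probe (illustrative only, not RFC 5357 compliant).
# Each UDP test packet carries a sequence number and a send timestamp; a reflector
# echoes it back and the sender derives one latency reading per packet.
import socket
import struct
import time

REFLECTOR = ("192.0.2.10", 5000)   # hypothetical reflector address
PACKET_SIZE = 200                  # bytes, chosen to match the emulated application
SEND_INTERVAL = 0.02               # seconds, i.e. 50 packets per second
NUM_PACKETS = 500

def run_probe():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    latencies_ms = {}
    for seq in range(NUM_PACKETS):
        t_send = time.time()
        header = struct.pack("!Id", seq, t_send)            # sequence number + timestamp
        payload = header + b"\x00" * (PACKET_SIZE - len(header))
        sock.sendto(payload, REFLECTOR)
        try:
            data, _ = sock.recvfrom(2048)                   # reflected packet
            rx_seq, rx_t_send = struct.unpack("!Id", data[:12])
            latencies_ms[rx_seq] = (time.time() - rx_t_send) * 1000.0
        except socket.timeout:
            pass                                            # counted later as packet loss
        time.sleep(SEND_INTERVAL)
    sock.close()
    return latencies_ms

if __name__ == "__main__":
    results = run_probe()
    print(f"received {len(results)} of {NUM_PACKETS} packets")
```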
This approach can be scaled to simulate data streams with the same characteristics as a real interactive application and to provide hundreds or thousands of per-packet latency readings instead of just a few sample latencies. The simulated traffic can reflect typical patterns for cloud gaming or remote meetings, matching not only packet size and frequency but also the statistical distribution of data rates over short sections.
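A possible way to organize such an emulation is sketched below: the target data rate is defined per short section and converted into a packet schedule. The section length, packet size and the distribution the rates are drawn from are illustrative assumptions, not values taken from ITU-T G.1051.

```python
# Illustrative sketch: derive a packet schedule from a target data-rate pattern.
# The per-section data rates below are hypothetical placeholders; in practice they
# would follow the statistical distribution of the application being emulated.
import random

PACKET_SIZE_BYTES = 500          # fixed packet size, as in the emulated application
SECTION_LENGTH_S = 0.1           # short section over which the data rate is defined

def packets_per_section(rate_kbps: float) -> int:
    """Number of packets needed in one section to reach the target data rate."""
    bytes_per_section = rate_kbps * 1000 / 8 * SECTION_LENGTH_S
    return max(1, round(bytes_per_section / PACKET_SIZE_BYTES))

# Draw section data rates from a simple distribution (placeholder for the
# application-specific distribution used when parametrizing the emulation).
section_rates = [random.gauss(mu=2000, sigma=400) for _ in range(50)]  # kbit/s
schedule = [packets_per_section(r) for r in section_rates]
print(schedule[:10])
```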
ITU-T G.1051 gives guidelines for shaping these data patterns and defines statistical KPIs derived from the individual per-packet latencies, such as higher-order latency statistics, packet delay variation and packet loss statistics.
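The sketch below shows how such statistical KPIs could be aggregated from a list of per-packet latencies; the chosen percentiles and the simple packet delay variation definition (high percentile minus median) are illustrative and not quoted from the recommendation.

```python
# Sketch of statistical KPIs computed from per-packet latencies (illustrative
# definitions, not the exact KPI formulas from ITU-T G.1051).
import math

def percentile(sorted_values, p):
    """Nearest-rank percentile (p in 0..100) of an already sorted list."""
    if not sorted_values:
        return float("nan")
    rank = max(1, math.ceil(p / 100 * len(sorted_values)))
    return sorted_values[rank - 1]

def latency_kpis(latencies_ms, packets_sent):
    """Aggregate per-packet latencies into statistical KPIs."""
    ordered = sorted(latencies_ms)
    return {
        "packet_loss_ratio": 1.0 - len(ordered) / packets_sent,
        "latency_median_ms": percentile(ordered, 50),
        "latency_p95_ms": percentile(ordered, 95),     # higher-order latency
        "latency_p99_ms": percentile(ordered, 99),
        # Packet delay variation, here simply the spread between a high
        # percentile and the median (one of several possible definitions).
        "pdv_ms": percentile(ordered, 95) - percentile(ordered, 50),
    }

print(latency_kpis([22.0, 25.1, 23.4, 80.2, 24.0, 26.3], packets_sent=8))
```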
ITU-T G.1051 states that quality of service (QoS) measurements must cover both latency and interactivity. It also provides guidelines for shaping traffic patterns and for applying over-the-top quality of experience (QoE) models.
Over-the-top: An interactivity model
Annex A was added to Recommendation ITU-T G.1051 and is a formal part of the standard. It defines an over-the-top QoE model that quantifies the perceived interactivity of simulated cloud gaming or remote meeting applications. The QoE model uses the KPIs defined in the core text of the recommendation, which can be obtained with the latency measurement method defined there. The model can be scaled for individual application classes with simple parametrization. It takes into account the base latency of a connection, the packet delay variation (consistency, i.e. any latency changes or peaks) and packet loss.
Perceived interactivity is not constant for a given latency but varies from application to application. Latencies and latency variations can alter the perceived interactivity of cloud gaming or remote meetings. Users may be very sensitive to longer delays in some applications, while lost packets or short-term distortions cause problems in others.
The QoE model in Annex A considers, weights and aggregates these individual QoS variables according to their importance for the modeled application. With suitable parameters and weighting, the model can be scaled to a wide variety of applications. Annex A also gives guidelines for deriving these parameters, either from subjective tests or from more generic QoS definitions, such as the 3GPP definitions for individual application classes.
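As a rough illustration of such a weighted aggregation, the following sketch maps base latency, packet delay variation and packet loss to a single interactivity score. The weights, limits and the linear mapping are invented for this example and are not the parameters defined in Annex A.

```python
# Hypothetical sketch of an Annex-A-style QoE aggregation: weights, limits and the
# mapping below are invented for illustration, not taken from ITU-T G.1051 Annex A.
def interactivity_score(base_latency_ms, pdv_ms, loss_ratio,
                        w_latency=0.5, w_pdv=0.3, w_loss=0.2,
                        latency_limit_ms=100.0, pdv_limit_ms=50.0, loss_limit=0.02):
    """Map QoS inputs to a 1..5 score by weighting normalized impairments."""
    def impairment(value, limit):
        return min(1.0, max(0.0, value / limit))   # 0 = no impairment, 1 = limit reached

    total = (w_latency * impairment(base_latency_ms, latency_limit_ms)
             + w_pdv * impairment(pdv_ms, pdv_limit_ms)
             + w_loss * impairment(loss_ratio, loss_limit))
    return 5.0 - 4.0 * total                       # 5 = excellent, 1 = poor

# A cloud gaming parametrization would use stricter limits than a remote meeting.
print(interactivity_score(base_latency_ms=35, pdv_ms=12, loss_ratio=0.001))
```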
Implementation: Scaling and applying model parameters
The formal sections of Recommendation ITU-T G.1051 define useful KPIs, together with guidelines and scalable, flexible model outlines for measuring latency and predicting interactivity.
Appendices to the formal recommendation list best practices for using and parametrizing the models above. Appendix I includes a set of parameters for setting data rates to emulate sample applications, such as online gaming, HD video chat or drone control, and shows how to apply QoE model parameters in line with Annex A. The weights and thresholds are taken from generic 3GPP recommendations for certain applications and QoS classes in mobile networks. Appendix II includes a sample QoE parametrization in line with Annex A based on a subjective test in which the online gaming experience was assessed in a lab.
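A parametrization along these lines could be organized as a set of application profiles, as sketched below; the numbers are placeholders rather than values from ITU-T G.1051 or 3GPP.

```python
# Illustrative application profiles in the spirit of Appendix I; the values are
# hypothetical placeholders showing how such a parametrization could be organized.
APPLICATION_PROFILES = {
    "online_gaming": {"packet_size_bytes": 200, "packets_per_s": 60,
                      "latency_limit_ms": 50, "loss_limit": 0.001},
    "hd_video_chat": {"packet_size_bytes": 1200, "packets_per_s": 100,
                      "latency_limit_ms": 150, "loss_limit": 0.01},
    "drone_control": {"packet_size_bytes": 100, "packets_per_s": 50,
                      "latency_limit_ms": 20, "loss_limit": 0.0001},
}

profile = APPLICATION_PROFILES["online_gaming"]
data_rate_kbps = profile["packet_size_bytes"] * 8 * profile["packets_per_s"] / 1000
print(f"emulated data rate: {data_rate_kbps:.0f} kbit/s")
```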
Emulating data flows – a deterministic way to achieve real-field data latency
The test methodology in Recommendation ITU-T G.1051 defines simulations of real interactive applications. It does not simulate an individual game or video chat application but a network load typical of them. Packet size and frequency (data rate) as well as the transport protocol and QoS classes are typical of real-world applications. The simulation can also reflect different data rates for different application phases, such as a setup phase, phases with higher or sustained data rates and a trailing phase, and it emulates peak phases when the content in an app changes.
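One way to express such a phased pattern is shown below; the phase names follow the description above, while the durations and data rates are hypothetical and would be replaced by values derived from the application to be emulated.

```python
# Sketch of a phased traffic pattern (setup, peak, sustained, trailing) with
# hypothetical durations and data rates.
PHASES = [
    ("setup",      2.0,  500),   # name, duration in s, data rate in kbit/s
    ("peak",       3.0, 8000),   # e.g. a scene change or content download
    ("sustained", 20.0, 3000),
    ("trailing",   2.0,  300),
]

def rate_at(t_seconds: float) -> float:
    """Return the target data rate (kbit/s) at a given time in the pattern."""
    elapsed = 0.0
    for _, duration, rate in PHASES:
        if t_seconds < elapsed + duration:
            return rate
        elapsed += duration
    return 0.0   # pattern finished

print([rate_at(t) for t in (1.0, 4.0, 10.0, 26.5)])
```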
The ITU standard defines the methods for creating and parameterizing traffic patterns and provides examples. The approach is scalable and can be extended to many other applications. Because the testing is generic, results can be compared independently of the individual application, its specific use and the location of the application server in the internet. The test approach enables efficient latency testing in loaded channels and reflects the influence of the radio network on latency. Even when a normal network data server is used, the main latency variations occur in the radio and core networks, which are the focus of this test method. ITU-T approval of the standard underlines the importance of real-world network latency testing when evaluating network readiness for interactive applications and quantifying predicted performance.
ITU-T G.1051 is another example of Rohde & Schwarz testing competence, which covers a wide range of applications and is widely accepted in the market and in standardization communities.
Read more about interactivity testing in parts 1 to 7 of this series and download our white paper "Interactivity test":
Interactivity test: QoS/QoE measurements in 5G (part 1)
Interactivity test: Concept and KPIs (part 2)
Interactivity test: Examples from real 5G networks (part 3)
Interactivity test: Distance to server impact on latency and jitter (part 4)
Interactivity test: Impact of changing network conditions on latency and jitter (part 5)
Interactivity test: Packet delay variation and packet loss (part 6)
Interactivity test: Dependency of packet latency on data rate (part 7)
White paper "Interactivity test"