Whether you’re a brand, agency, advertiser, streaming platform, or publisher, in today’s digital world, you need to understand streaming. We put together a guide to all things streaming metrics to help you not only understand the landscape, but also gain a competitive advantage.
This post, the first in a series on streaming metrics that matter, focuses on the metrics that can help drive business decisions to increase viewer engagement, reduce subscriber churn, and elevate the customer experience.
Understanding total performance
While many of these metrics can be measured for an individual show, viewer, or device, we’ll focus on the metrics that keep both business and technical teams informed on the overall performance of content and quality of experience (QoE) when measured across an entire streaming service.
Attempts counts all attempts to play a video, initiated when a viewer clicks play or a video auto-plays. An attempt can result in a successful play, or in an early termination due to a video start failure (VSF) or an exit before video start (EBVS). It’s important to track attempts for two reasons: you want to know how many plays or auto-plays your content received, and you want to know how many of those were successful. By comparing attempts with successful video starts, TechOps teams can quickly identify specific content or ads where encoding problems, player problems, or ad errors cause a drop-off in engagement at the beginning of a viewer session, and diagnose the root cause.
A play is tallied when the viewer sees the first frame of video; unsuccessful attempts are not included. This metric shows the number of plays that started during a selected time period. As a percentage, it shows the share of attempts that resulted in plays in the selected interval.
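The relationship between attempts and plays can be sketched in a few lines. This is a minimal illustration, not a real telemetry schema: the record fields and outcome labels are assumptions.

```python
# Sketch: computing play rate from attempt records.
# The "outcome" field and its labels are illustrative assumptions,
# not a real streaming telemetry schema.
attempts = [
    {"id": 1, "outcome": "play"},  # first frame rendered
    {"id": 2, "outcome": "vsf"},   # video start failure
    {"id": 3, "outcome": "play"},
    {"id": 4, "outcome": "ebvs"},  # exit before video start
]

plays = sum(1 for a in attempts if a["outcome"] == "play")
play_rate = plays / len(attempts) * 100  # share of attempts that became plays

print(f"{plays} plays out of {len(attempts)} attempts ({play_rate:.0f}%)")
```

A widening gap between attempts and plays, segmented by asset or ad, is the starting point for the root-cause drill-down described above.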
Active Streaming Plays
Active plays is the number of sessions that started playing successfully and are not closed within a given time interval. Active plays is a key metric that, among many use cases, can measure the success and engagement of live events in real time. This metric can help TechOps and engineering teams conduct capacity planning ahead of future events as well as drive content decisions around the shows that drove the highest viewer engagement.
Concurrent Streaming Plays
Concurrent plays is the maximum number of simultaneously active sessions during a given interval. Monitoring concurrent plays is important to help you define your audience, determine video quality, and track which customers are actively engaged with a video. Concurrent plays is particularly helpful when planning network capacity for large-scale live events such as sporting events, premieres, etc.
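Peak concurrency can be computed from session start and end timestamps with a simple sweep: add one at each session start, subtract one at each end, and track the running maximum. A minimal sketch with illustrative timestamps (epoch seconds):

```python
# Sketch: peak concurrent plays from (start, end) session timestamps.
# Timestamps are illustrative, in epoch seconds.
sessions = [(0, 300), (60, 240), (120, 180), (500, 600)]

events = []
for start, end in sessions:
    events.append((start, +1))  # a session becomes active
    events.append((end, -1))    # a session closes

peak = active = 0
# Sort by time; at equal timestamps process ends (-1) before starts (+1)
# so back-to-back sessions are not double-counted.
for _, delta in sorted(events):
    active += delta
    peak = max(peak, active)

print(peak)  # 3 sessions overlap between t=120 and t=180
```

The same sweep run per minute or per second gives the concurrency curve used for capacity planning ahead of large live events.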
Total Minutes Streamed
Total minutes is the combined viewing time across all devices for all sessions that ended during the selected time range. Measured daily, weekly, monthly, or over any other specified range, total minutes is a key performance indicator and a clear signal of the overall growth of your streaming service.
The unique devices metric counts the total number of devices that viewed at least one frame of video during a selected interval. This metric helps identify the devices viewers use most frequently to consume your content and can be used to prioritize application development for the devices seeing the most traffic.
The minutes per unique device metric is calculated by dividing the total played minutes in a selected time range by the number of unique devices in that time range. Knowing which device a viewer is using and how engaged they are with your service on that device can prove invaluable when it comes to troubleshooting issues with customer care.
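The calculation is a straightforward ratio. A minimal sketch, where the device IDs and minute values are made up for illustration:

```python
# Sketch: minutes per unique device from (device, played minutes) records.
# Device IDs and values are illustrative.
viewing = [
    ("device-a", 42.0),
    ("device-b", 15.5),
    ("device-a", 30.0),  # same device, second session
    ("device-c", 12.5),
]

total_minutes = sum(minutes for _, minutes in viewing)
unique_devices = len({device for device, _ in viewing})
minutes_per_device = total_minutes / unique_devices

print(f"{minutes_per_device:.1f} minutes per unique device")
```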
Average percent completion
Average percent completion shows the duration viewed compared with the total length of the content. A high percentage indicates a high level of viewer engagement with the asset, channel, and service. This metric is one of the most valuable for content teams to understand, with depth and granularity, the way in which viewers engage with content. It helps answer the big question of, “Is this content interesting and relevant for my viewers?” and can play a major role in the decisions which drive content production.
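As a rough sketch, the metric is the mean of per-session completion ratios. The session data below is invented for illustration:

```python
# Sketch: average percent completion across sessions.
# Each record pairs minutes watched with the asset's total length (minutes).
# Values are illustrative.
sessions = [
    {"watched": 45.0, "length": 60.0},  # 75% of a one-hour episode
    {"watched": 30.0, "length": 30.0},  # finished a half-hour episode
    {"watched": 6.0,  "length": 60.0},  # bailed out early
]

completions = [s["watched"] / s["length"] * 100 for s in sessions]
avg_completion = sum(completions) / len(completions)

print(f"average percent completion: {avg_completion:.1f}%")
```

Broken down by asset, a consistently low average flags content that fails to hold viewers, while a high average supports renewing or promoting it.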
Video start failures (VSF) — business and technical
VSF measures how often attempts terminated during video startup, before the first video frame was played, with a fatal error reported. If your business is working to curb shared subscriber accounts, the VSF-Technical metric lets you segment out errors related to that enforcement, so your TechOps team isn’t chasing problems that aren’t actually the result of network or stream issues.
Streaming Performance Index (SPI)
SPI is an aggregate score that looks at benchmark thresholds considered “good” or “bad”—grading the overall quality of every stream for a number of quality of experience metrics including video start failure (VSF), exits before video start (EBVS), rebuffering, video playback failures (VPF), video startup time (VST), and picture quality. SPI can be broken down by different dimensions such as device and country to better understand the experience across audience segments and endpoints. In addition to providing a score, benchmarks contextualize the score so that you can quantify how your performance ranks compared to your industry peers.
Measuring quality of experience
It’s not hard to surmise that a poor viewer experience impacts viewer engagement and therefore churn and ad dollars. These metrics, either on their own or correlated against other metrics, will instantly identify areas where your stream is underperforming and resulting in a not-so-great viewer experience.
An ended play is a play that ended during a specific period. A session can end for three reasons: it ended gracefully (with or without content completion), it ended due to failure, or it ended due to timeout. Other metrics, such as VSF, will help you determine whether ended plays are due to quality issues. Ended plays can be combined with other QoE metrics to determine why a viewer stopped watching. It’s perfectly normal for someone to end a show of their own accord, but when an ended play is the result of a failure in the stream, it needs diagnosing to ensure similar viewers aren’t having the same problem.
Ended plays/unique devices
This metric is calculated by dividing the total number of ended plays in a selected time range by the number of unique devices in that time range. An increasing or higher number indicates that viewers watched more videos on the same device and that your content likely saw higher engagement. Understanding the correlation between viewer engagement and devices helps you optimize player performance, identify viewers’ device preferences, and know the screens on which customers are most likely to engage with your content.
Video startup time (VST)
Video startup time is the number of seconds between when the user clicks play or video auto-starts and when the first frame of a video is rendered, excluding any pre-roll ads. Any ad startup time and ad playback are not counted toward VST. If content is pre-fetched in the background while an ad is playing, this buffering is not counted toward VST because it is not perceived by the user.
Video restart time (VRT)
Video restart time is the number of seconds between a user-initiated seek and the moment video playback resumes. A seek occurs when a viewer scrubs the play bar, fast-forwards, or rewinds the video. You can use the VRT metric to monitor unnecessary delays in video access after user-initiated seeks; these delays often lead to session abandonment.
Rebuffering occurs when the video stalls during playback and the viewer must wait for the video to resume playing. Frequent rebuffering is a major source of poor quality of experience and often leads to audience abandonment. A high rebuffering ratio, or a significant increase in this metric, can indicate declining overall network performance and a likelihood of viewer abandonment. Conviva measures rebuffering ratio across all dimensions, so you can understand exactly which piece of your delivery puzzle is impacting the greatest number of viewers. For customer-obsessed teams, this metric can be a great universal indicator of how “good” the viewer experience is. Knowing which viewers were impacted by a rebuffering issue means customer care teams can run nurture campaigns for viewers subject to a poor experience. This metric can also be used to diagnose and troubleshoot which parts of the delivery network are the root cause of high rebuffering.
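One common way to express rebuffering as a ratio is buffering time divided by total playback time (buffering plus playing). The sketch below uses that definition with invented session data; the 2% threshold and field names are assumptions, not a standard:

```python
# Sketch: rebuffering ratio per session, defined here as
# buffering time / (buffering time + playing time), as a percent.
# Field names, values, and the 2% threshold are illustrative assumptions.
sessions = [
    {"id": "s1", "buffering_s": 2.0,  "playing_s": 598.0},
    {"id": "s2", "buffering_s": 45.0, "playing_s": 255.0},
    {"id": "s3", "buffering_s": 0.0,  "playing_s": 1200.0},
]

for s in sessions:
    total = s["buffering_s"] + s["playing_s"]
    s["rebuffer_ratio"] = s["buffering_s"] / total * 100

# Flag sessions above the (assumed) quality threshold for follow-up.
impacted = [s["id"] for s in sessions if s["rebuffer_ratio"] > 2.0]
print(impacted)
```

Aggregating the flagged sessions by CDN, ISP, or device is how the dimensional breakdown described above narrows down the root cause.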
Video playback failures (VPF) — business and technical
Video playback failure occurs when video play terminates due to a playback error, such as video file corruption, insufficient streaming resources, or a sudden interruption in the video stream. VPFs are an important measurement of service quality and audience engagement, especially when a large percentage of plays terminate due to VPF. Conviva, for example, has additional metrics to track video playback failures due to business logic, such as user account limits or technical playback issues.
Exits before video start (EBVS)
EBVS measures the attempts that terminated before the video started—without a reported fatal error. If a fatal error is reported when the video terminates before starting, the video termination is counted as a VSF instead of EBVS. The most common reason for EBVS is that content isn’t loading fast enough. There are many reasons that could cause this problem and drilling down from EBVS is an extremely efficient way for TechOps teams to diagnose one of the most frustrating viewer experience pitfalls in streaming media—a black screen.
Average bitrate measures the bits played by the player; bits downloaded during buffering or while the video is paused are not included. Typically, the higher the bitrate, the better the image quality and viewer experience. By looking at bitrate across content, devices, player software, CDNs, and more, TechOps teams can pinpoint the areas causing poor bitrate and plan optimizations. For example, a specific player device with a drastically worse average bitrate could signal to product teams that a device or app software version needs optimization.
Average frame rate
Average frame rate measures the average number of decoded frames, in frames per second (fps), played by the player. Typically, the higher the frame rate the better the image quality and viewer experience. Underperforming frame rate can be a clear indicator of an encoding problem on media assets.
Rendering quality is the ratio (as a percent) of video rendering rate to encoding rate. For example, if the rendering rate is 20 frames/sec and the encoded rate is 24 frames/sec, the rendering quality is 83.3%. The rendering rate can differ from the encoded rate because the player will skip frames while a video is being screened if it can’t get them quickly enough. This metric can help you determine if your rendering rate and encoding rate are appropriate.
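The worked example above can be written directly as code:

```python
# Rendering quality: ratio of rendered frame rate to encoded frame
# rate, expressed as a percent (per the definition above).
def rendering_quality(rendered_fps: float, encoded_fps: float) -> float:
    return rendered_fps / encoded_fps * 100

print(f"{rendering_quality(20, 24):.1f}%")  # 83.3%, matching the example
```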
Bandwidth is the aggregate bandwidth across all videos for a selected time range. The bandwidth metric can be used to identify underperforming CDNs and ISPs across the overall delivery of your streaming service, and it is also valuable at the individual viewer level, which is crucial for customer care, for example when diagnosing whether a viewer’s performance issue is caused by an upstream provider or the last mile.
Understanding exactly what your viewers are experiencing enables teams to understand viewer engagement as well as diagnose and optimize stream performance to ensure a flawless experience for everyone, on every screen, every time. In our next blog, we’ll explore the metrics that matter to marketers.