Today we have deployed the world’s largest sensor network on behalf of many video on demand publishers (SVOD and AVOD) and TV broadcasters, measuring quality of experience (QoE) directly from the consumer’s in-screen experience, along with very detailed engagement data, in a continuous, census-based approach. Most legacy approaches were either discrete, event-based web counting or statistical sampling-based audience surveys. Conviva has invented a revolutionary new approach: true viewer-level continuous measurement. It was purpose-built for video, and the sensor technology computes consistent and accurate metrics across a diverse set of over-the-top (OTT) devices and application software frameworks.
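As a rough sketch of what continuous, census-based measurement implies, the snippet below models a periodic heartbeat sample and a simple census-style aggregate over all samples. All field and function names here are illustrative assumptions, not Conviva’s actual sensor schema.

```python
from dataclasses import dataclass

@dataclass
class Heartbeat:
    """One periodic QoE/engagement sample from a client-side sensor.
    Field names are illustrative, not Conviva's actual schema."""
    session_id: str
    timestamp: float
    bitrate_kbps: int   # current video bitrate
    buffering_ms: int   # time spent rebuffering since the last heartbeat
    playing_ms: int     # time spent playing since the last heartbeat
    device: str
    asset: str

def rebuffering_ratio(beats):
    """Census-style aggregate: share of total watch time lost to buffering,
    computed over every sample rather than a statistical panel."""
    buffering = sum(b.buffering_ms for b in beats)
    playing = sum(b.playing_ms for b in beats)
    total = buffering + playing
    return buffering / total if total else 0.0
```

Because every viewer session emits heartbeats, an aggregate like this reflects the whole audience rather than a sampled subset.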
The heart of the Video AI Architecture is the Conviva Video AI Platform. It ingests heartbeats from the sensors with a real-time processor that leverages Conviva’s unique, proprietary Video Graph. The platform also has secure containers where you can route appropriate data for internal and external use. Finally, Conviva has developed and continues to evolve a set of video-specific AI models that are the foundation of its analytics products.
The processor is a multi-step pipeline that ingests each heartbeat in real time, cleanses the data, and then enriches it with metadata and video-specific context from the Video Graph. The heartbeat router is fully customizable, with flexible policy controls, so it can route the appropriate stream data and metadata to each internal and external consumer. The router also supports full stream replication, splitting, and transformation.
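A policy-driven router of this kind can be sketched in a few lines. In this hypothetical model, each policy pairs a match predicate with a transformation and a list of destinations; an event matching several policies is replicated to every destination. The policy shape and destination names are assumptions for illustration.

```python
def make_router(policies):
    """Build a router from (predicate, transform, destinations) policies.
    An event matching multiple policies is replicated to each destination,
    optionally transformed per destination (e.g. stripping fields for
    external consumers)."""
    def route(event):
        delivered = {}
        for predicate, transform, destinations in policies:
            if predicate(event):
                payload = transform(event)
                for dest in destinations:
                    delivered.setdefault(dest, []).append(payload)
        return delivered
    return route
```

For example, one policy might replicate every event untransformed to an internal analytics feed, while another strips session identifiers before routing a subset of events to an external partner feed.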
In essence, each raw heartbeat or video data event is transformed into a fully formed video stream with rich metadata and contextual awareness that powers analytical models.
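The cleanse-then-enrich steps above can be sketched as a minimal pipeline. The cleansing rules and metadata lookup here are illustrative stand-ins; the actual enrichment draws on the Video Graph described below.

```python
def cleanse(event):
    """Drop malformed samples and clamp obviously bad values
    (illustrative rules only)."""
    if "session_id" not in event or "timestamp" not in event:
        return None
    event = dict(event)
    event["bitrate_kbps"] = max(0, event.get("bitrate_kbps", 0))
    return event

def enrich(event, metadata):
    """Attach content/infrastructure context keyed by session,
    standing in for a Video Graph lookup."""
    event = dict(event)
    event.update(metadata.get(event["session_id"], {}))
    return event

def process(raw_events, metadata):
    """Cleanse then enrich each raw heartbeat; malformed events are dropped."""
    out = []
    for e in raw_events:
        cleaned = cleanse(e)
        if cleaned is not None:
            out.append(enrich(cleaned, metadata))
    return out
```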
The OTT Video Graph is derived from the analysis of each stream in the processor and video AI models. The graph captures the relationships between entities in both the content and infrastructure of each stream. The processor and AI models can then leverage these relationships to very quickly track down exactly what entity in the hierarchy might be responsible for problems in service delivery. It can also be used for making content recommendations or understanding device usage by application, channel, or show type.
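To make the root-cause idea concrete, the toy function below walks a set of entity dimensions (here, CDN and device, chosen as illustrative examples) and returns the entity whose sessions show the highest failure rate. This is a deliberately simplified stand-in for graph-based fault localization, not Conviva’s actual algorithm.

```python
from collections import defaultdict

def localize_fault(sessions, dimensions, min_sessions=2, threshold=0.5):
    """Group sessions by each entity dimension and return the
    (dimension, value, failure_rate) with the worst failure rate at or
    above `threshold`. A toy sketch of hierarchy-based root-cause analysis."""
    best = None
    for key in dimensions:
        groups = defaultdict(list)
        for s in sessions:
            groups[s[key]].append(s["failed"])
        for value, fails in groups.items():
            if len(fails) < min_sessions:
                continue  # too few sessions to blame this entity
            rate = sum(fails) / len(fails)
            if rate >= threshold and (best is None or rate > best[2]):
                best = (key, value, rate)
    return best
```

In this sketch, failures concentrated on one CDN implicate that CDN rather than any single device type, mirroring how relationships in the graph narrow a problem to one entity in the hierarchy.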
The video AI models are machine learning algorithms that use the massive amounts of data flowing through the platform from Conviva’s global customer base (AVOD publishers, SVOD publishers, and pay TV operators) to get smarter. In essence, they evolve with the data they have access to, so every customer benefits from what is known in artificial intelligence (AI) as transfer learning. There are three classes of video AI models in Conviva’s system. They can correlate QoE with engagement and predict churn based on those correlations. They are also the foundation of Video AI Alerts, which detect service anomalies across all QoE metrics while also diagnosing the root cause of each anomaly by leveraging Conviva’s Video Graph.
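As the simplest possible illustration of anomaly detection over a QoE metric, the sketch below flags a value that deviates from its historical baseline by more than a few standard deviations. This z-score check is an assumption chosen for clarity; the production models are far more sophisticated and, as noted above, combine detection with graph-based diagnosis.

```python
def detect_anomaly(history, current, k=3.0):
    """Flag `current` as anomalous if it sits more than k standard
    deviations from the mean of `history` (simple z-score baseline)."""
    n = len(history)
    mean = sum(history) / n
    variance = sum((x - mean) ** 2 for x in history) / n
    std = variance ** 0.5
    if std == 0:
        return current != mean
    return abs(current - mean) > k * std
```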