By Juan Martin, CTO at Quickplay

Can the power of Generative AI solve the content discoverability issues that plague streaming? After a sweep of major awards by our Gen AI-enabled Quickplay Media Companion at NAB 2024, it’s clear to us that the answer is a resounding “yes!”

Long a challenge for the industry, discoverability in streaming has gotten worse as content options have proliferated. Accenture finds that 72% of consumers “report frustration at finding something to watch,” up six percentage points from the previous year.

Quickplay has led the way in harnessing generative AI technology to enable streamers to overcome the technical barriers and media metadata challenges that have historically impeded viewer search and discovery. At NAB 2024, our Media Companion won the BAM Award in the Consume category from IABM, the international trade association for broadcast and media technology; a Future Best of Show Award; and an NAB Product of the Year award, which recognizes the most cutting-edge advancements and technologies shaping the future of content creation, distribution and monetization.

We’ve been in our customers’ shoes, so we know the issues they face. They’re competing for audience eyeballs, trying to reduce churn, and trying to be more efficient about their content spend. All of that is compounded by overwhelmed search paradigms that rely too heavily on exact keyword matches, struggle to interpret the intent behind user queries, and lack both access to and the tools to wrangle the extensive media metadata that could help organize a streamer’s library and tailor it to a user’s context.

We’re integrating large language models (LLMs) and AI marketplaces – most notably, Google Cloud’s Vertex AI – into our cloud-native, open-architected CMS to solve many of these problems. Here’s why that’s paying off for the streaming industry:

  • LLMs are trained on massive amounts of data, including public information, which makes them better at understanding what people are searching for, even if they use conversational language.
  • LLMs can also tap into public datasets to make recommendations based on trends and cultural context, and to enrich the media metadata available in a CMS. For example, for a viewer who logs on in the middle of a blizzard in Wisconsin, we could recommend a movie with beach scenes as an escape.
  • The CMS already has rich information on user preferences and behaviors, such as settings and viewing history. By applying powerful LLMs to the CMS, we can provide recommendations that continuously learn from user interactions and reflect viewers’ changing preferences and habits.
  • Combining an LLM with the CMS ensures that compliance standards remain in force. The CMS makes content available according to relevant rules, so users don’t receive recommendations that violate licensing agreements or regional restrictions. The CMS also enforces data privacy guidelines to ensure that data used to make recommendations is treated according to privacy regulations. (A simplified sketch of this query-and-compliance flow follows this list.)
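
To make the pattern above concrete, here is a minimal, hypothetical sketch: a conversational query is compared against catalog metadata, and results are filtered so that only titles the CMS permits for the viewer’s region can be recommended. The names used here (CatalogItem, embed_text, recommend) are illustrative and are not Quickplay or Vertex AI APIs, and the bag-of-words “embedding” is a toy stand-in that keeps the example self-contained; a real deployment would call an LLM embedding endpoint instead.

```python
from dataclasses import dataclass, field
from collections import Counter
import math


@dataclass
class CatalogItem:
    title: str
    description: str                        # metadata the CMS already holds
    licensed_regions: set = field(default_factory=set)


def embed_text(text: str) -> Counter:
    """Toy stand-in for an LLM embedding: bag-of-words term counts."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def recommend(query: str, catalog: list, viewer_region: str, top_k: int = 3) -> list:
    """Rank titles by similarity to a conversational query, but only among
    titles the CMS says the viewer's region is licensed to receive."""
    query_vec = embed_text(query)
    # Compliance gate first: drop anything not licensed for this region.
    eligible = [item for item in catalog if viewer_region in item.licensed_regions]
    ranked = sorted(eligible,
                    key=lambda item: cosine(query_vec, embed_text(item.description)),
                    reverse=True)
    return ranked[:top_k]


if __name__ == "__main__":
    catalog = [
        CatalogItem("Island Summer", "a feel-good romance on a sunny beach", {"US", "CA"}),
        CatalogItem("Northern Storm", "a thriller set during a brutal blizzard", {"US"}),
        CatalogItem("Costa Azul", "a surfing documentary with warm coastal scenery", {"MX"}),
    ]
    # A conversational query from a snowed-in viewer in the US:
    for item in recommend("something warm and sunny to escape the snow", catalog, "US"):
        print(item.title)
```

The ordering is the point of the sketch: the licensing filter runs before ranking, so the LLM-driven similarity step can only ever surface content the viewer is actually entitled to see.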

We’ve talked before about how architecture matters and the incredible capabilities that our five pillars – Open, Modular, Universal Orchestration, Dedicated Instance and Cloud-Native – unlock for OTT providers. Our flexibility, modularity and seamless integration of APIs enable straightforward implementation of LLMs, giving our customers a leg up in the Generative AI and discoverability arms races. The Media Companion and Curator Assistant team up to give viewers and programmers more engaging content at a faster rate, a win on both ends.

The streaming industry has invested significant money and effort into developing and procuring content, but having great content is only half the story. At Quickplay we’re flexing our award-winning AI-powered products to get content discoverability issues out of the way. Our goal is to help viewers spend less time searching for content – and more time enjoying it.

Juan Martin