Leveraging Segments for Dynamic Video at Scale (Manipulation)
10:40 AM to 11:00 AM
This talk looks at what it takes to create dynamic and personalized video experiences at scale, without breaking the bank or the environment. We'll cover every segment of the pipeline: ingest, encoding, manipulation, delivery, playback, and analytics. Many people agree that the video experience of the future will be dynamic and personalized: not just content discovery, thumbnails, or a dynamic UI, but the video itself. There are many reasons to create dynamic video content, such as increasing inclusiveness, relevancy, engagement, and advertising value, so calling this the holy grail of video is not an exaggeration. Some existing standards could be used to define dynamic video content, like IMF (Interoperable Master Format). IMF already defines dynamic video for localization and compliance purposes, but only at the distribution stage; the dynamic aspects disappear after the ingest process, which turns it into a static asset. In this talk, I'll discuss how we can leverage and expand on this standard to describe video at a granular, per-scene level, and how to build a streaming workflow that supports this. I'll also discuss the problems we still have to solve before this technology can see mass adoption: some are obvious and simple to solve, others not so much. Finally, we'll look at how dynamic video workflows are a prerequisite for using technologies like 3D engines and Generative AI for content personalization in the future, and how production and distribution will need to be more connected to each other than ever before.