Encoding on the Edge for Cloud-Based Workflows
Live video encoding and broadcast contribution used to be fairly straightforward. All it took was connecting an SDI cable from a camera to a video encoding appliance. The video encoder would then compress the live content in H.264 or HEVC and send it over a satellite link, fiber connection, or IP network to a video decoder located in the main production studio.
With nearly ubiquitous access to broadband networks, broadcasters are now streaming their live video contribution feeds over public internet connections. What happens to the video stream, and which route it takes from source to destination, may depend on network conditions at a given time. However, from the perspective of the broadcast engineer, the goal is the same: point-to-point streaming from video encoder to decoder.
Moving content to the cloud
Increasingly, though, broadcasters are deploying real-time video processing, editing, content management, ad insertion, and even artificial intelligence functions such as automated metadata generation and CDN video delivery optimization within cloud-based instances. Postproduction, content storage, broadcast monitoring, and content distribution services are all moving to the cloud.
Producing live video in the cloud brings many significant advantages in terms of flexibility and scalability. With cloud-based workflows, content owners can quickly spin up temporary TV channels for special events, create live OTT services, and distribute content to affiliates without the need for satellite links. However, in most live broadcast scenarios, latency still needs to be kept as low as possible, ideally on par with traditional broadcast services, while managing the costs associated with ingress (moving data into the cloud) and egress (moving it out of the cloud).
SRT: Streaming to the cloud
Though some broadcasters may decide to rely on managed networks and direct-to-cloud over fiber services, using the public internet as a way to get live content to cloud-based workflows offers greater flexibility, cost efficiencies, and mobility for remote production.
Using the SRT streaming protocol, video encoders such as the Haivision Makito X4 can reliably and securely stream live content, including in 4K, as broadcast contribution feeds to REMI production workflows in the cloud. As high-speed internet access, including 5G services, becomes increasingly common, edge encoders and SRT are allowing broadcasters to cover remote events from anywhere.
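As an illustration of how an SRT contribution feed reaches the cloud, the open-source SRT toolkit includes a command-line tool, srt-live-transmit, that can relay a local stream to a cloud ingest point. This is a minimal sketch: the hostname, port, latency value, and passphrase below are placeholders, and a hardware encoder such as the Makito X4 would be configured through its own management interface rather than from a shell.

```shell
# Sketch: relay a local UDP contribution feed to a cloud ingest point over SRT.
# The destination hostname, port, and passphrase are placeholder values.
srt-live-transmit \
  "udp://:1234" \
  "srt://cloud-ingest.example.com:9000?mode=caller&latency=120&passphrase=ChangeMe1234"
```

Here the latency parameter (in milliseconds) sets the receive buffer SRT uses to absorb jitter and recover lost packets, and the passphrase enables AES encryption of the stream in transit, which is what makes contribution over the public internet practical.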
By processing raw video on the edge with a low latency encoder, broadcasters can keep end-to-end latency below one second while optimizing ingress bandwidth. For high-value content requiring 10-bit pixel depth and 4:2:2 chroma subsampling, edge encoding makes even more sense, as latency and bandwidth consumption can both be kept low.
Encoding video on the edge is key to fulfilling the promise of cloud-based broadcast workflows. Video encoders capable of low latency encoding, processing, and video streaming to cloud-based broadcast workflows don’t need to be complicated to set up and use as edge devices. Learn more about the Makito X4, and how it can be configured in 10 simple steps.