
How to Live Stream Over Unpredictable Networks

Anyone involved in professional video streaming knows that sending video over networks is a tricky game. There are a lot of factors that can contribute to video stalling at the endpoint. It’s become a fact of life for many of us.

Knowing this, we’ve put a lot of work into making sure your live stream never fails. We’ve developed a new feature for our Makito X encoder called Network Adaptive Encoding, which keeps the stream going no matter how much bandwidth is available.

So, let’s take a quick look at some of the problems we all face, and get into what we’re doing to help you keep your live streams going through to the people who need to see them.

What is latency, and when does it matter?

Most of you reading this will probably already be familiar with latency, but to break it down for everyone: latency is simply the time it takes for video to travel from the source to the viewer. Every step along the way, from capturing and encoding the video to transmitting, decoding, and displaying it, adds delay, and the total of those delays is the latency.

Latency can be introduced by any device in a workflow, such as the camera, the encoder, and everything in between there and the viewer. That includes the network the signal is being sent over, whether it’s the public internet, satellite, or anything else.
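Since every stage adds its own delay, the end-to-end latency is simply the sum of the delays along the chain. A minimal sketch of that arithmetic, with purely illustrative figures for each stage:

```python
# End-to-end ("glass-to-glass") latency is the sum of the delay added
# by each stage in the workflow. The figures below are illustrative
# examples, not measurements from any particular device.

pipeline_ms = {
    "camera capture": 17,
    "encoder": 50,
    "network transit": 80,
    "decoder buffer": 120,
    "display": 17,
}

total_ms = sum(pipeline_ms.values())
print(f"glass-to-glass latency: {total_ms} ms")
```

Note that in this example the decoder buffer dominates: receivers often hold frames to smooth out network jitter, which is why tuning buffer sizes is one of the main levers for reducing latency.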

Latency can range from multiple minutes in the case of Over-the-Top (OTT) distribution down to milliseconds in the case of live and real-time streaming applications.

We can use broadcast as a benchmark for latency. We’ve all seen the interviews where you’ll see several seconds of lag in between an interviewer asking a question, and the interviewee responding. This is the latency caused by the devices encoding and decoding the video, and even the network in between.

For interactive and real-time applications, several seconds of latency is unacceptable; it needs to be milliseconds at most. Whether it’s an all-hands live stream for a corporate event or a camera feed from a machine being operated remotely, the feed needs to be almost immediate. In the first case, low latency enables instant feedback; in the second, it’s paramount to operating the device safely.

So, how do you make sure that the stream always gets to the endpoint?

When your video absolutely, positively has to get there

We all know that networks, especially the public internet, can be unreliable. Even your own internal network can get ‘choked’ in certain situations, which could make streaming video to your employees on that network a difficult task, or even an impossible one.

We know that once you start streaming live, you want your stream to get to the viewers, so we’ve developed a new feature called Network Adaptive Encoding that makes sure that no matter what your network conditions are, the video will get through.

Network Adaptive Encoding detects changes in available bandwidth and adjusts the video bitrate accordingly. So even when your network chokes, you’ll still get a stream through to the endpoints.
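The core idea, stripped of implementation detail, is to track an estimate of the available bandwidth and keep the encoder’s target bitrate within it. The sketch below illustrates that logic; all names, thresholds, and the headroom factor are illustrative assumptions, not the Makito X implementation.

```python
# Illustrative sketch of bandwidth-driven bitrate adaptation.
# All parameter names and values here are hypothetical examples.

def adapt_bitrate(available_kbps, floor_kbps=500, ceiling_kbps=8000,
                  headroom=0.8):
    """Return a target bitrate that fits the measured bandwidth.

    headroom keeps the stream below the link capacity so bursts
    (audio, metadata, retransmissions) don't cause packet loss.
    """
    target = int(available_kbps * headroom)
    # Clamp to the encoder's configured bitrate limits.
    return max(floor_kbps, min(ceiling_kbps, target))

# Example: the link degrades from 6 Mbps to 2 Mbps.
print(adapt_bitrate(6000))  # 4800 kbps: plenty of room
print(adapt_bitrate(2000))  # 1600 kbps: step down, keep streaming
```

The key design point is that the encoder degrades quality gracefully rather than letting the stream stall: a lower-bitrate picture that arrives beats a high-bitrate one that never does.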

Take a look at an explanation of Network Adaptive Encoding with a short demonstration in this video below.

Learn how Network Adaptive Encoding can work for you

We’ve recorded a webinar for those of you involved in broadcast and internet streaming workflows that goes deeper into the problems you face, and how we solve them.

You’ll learn how we solve the problem of unpredictable network conditions in video transport, and see how you can adapt your streams to changing conditions automatically, ensuring that your video always gets through. You can watch it now for free. Click the image below to get started.

