Live Video Streaming Transport Protocols

Video Encoding Basics: Live Video Streaming Protocols for Broadcast Contribution and Distribution

From RTMP to SRT, HLS, MPEG-DASH and CMAF, the list of video transport protocols currently available is extensive and can be confusing. As part of our Video Encoding Basics series, we’re attempting to demystify some of the fundamentals of video encoding and in this post, we’ll take you through what a video transport protocol does and where it’s used in the video delivery chain. We’ll also explore the different protocols currently available as well as their relative merits and shortcomings.

What Is a Transport Protocol?

Let’s start off with the basics. Essentially, a transport protocol is a communication protocol responsible for establishing a connection and delivering data across network connections. More specifically, transport protocols occupy layer 4 of the Open Systems Interconnection (OSI) model, the International Organization for Standardization’s standard template for describing a network protocol stack. Protocols at this layer can provide connection-oriented sessions and the reliable data delivery guarantees that are essential for file transfers and mission-critical applications. Importantly, different transport protocols support different combinations of optional capabilities, including error recovery, flow control, and re-transmission.

When Are Video Transport Protocols Used?

In video workflows, transport protocols are used for contribution, production, distribution and delivery. The diagram below illustrates these four distinct phases in the live video delivery chain. Contribution is sometimes referred to as “the first mile” and delivery as “the last mile,” the phase in which video streams are delivered directly to end viewers on their chosen device or TV screen. Among delivery protocols, you may have heard of HLS, MPEG-DASH and CMAF; these are HTTP-based and not suitable for first mile video streaming, as they introduce too much latency for live broadcast production facilities.

Live video over IP delivery chain

In addition, there are also proprietary protocols available which are owned by one company and usually require a license to be used by customers or third-party vendors. The challenges of using a closed proprietary protocol are that it can be costly (requiring ongoing monthly payments), it promotes vendor lock-in (preventing interoperability with other manufacturers’ devices) and there’s always a risk of orphan products when vendors discontinue a product line or go out of business.

For the purposes of this blog post, we’ll be focusing on the video streaming protocols available for public use and that include features critical for first mile video streaming such as low latency, packet-loss recovery and content encryption.

Transport Layer Protocols

TCP – Transmission Control Protocol

TCP is the most commonly used transport protocol on the internet. It guarantees that the recipient receives packets in order by numbering them, and the recipient sends acknowledgments back to the sender confirming receipt. If the sender does not get a correct response, it re-sends the packets; because of these acknowledgments, TCP is considered very reliable. Packets are also checked for errors. TCP is optimized for data integrity rather than timely delivery: packets are tracked so that no data is lost or corrupted in transit. This can result in relatively long delays while waiting for out-of-order messages or re-transmissions of lost messages, making TCP unsuitable for real-time, low latency live video applications.
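To make this concrete, here’s a minimal Python sketch using a loopback server (all names and port choices are illustrative): the operating system’s TCP stack handles the sequence numbering and acknowledgments, so the three messages arrive intact and in order without any extra effort from the application.

```python
import socket
import threading

# Sketch: TCP's sequencing and ACKs (handled by the OS) deliver bytes
# reliably and in order. A local server collects everything a client sends.

def run_server(server_sock, results):
    conn, _ = server_sock.accept()
    data = b""
    while True:
        chunk = conn.recv(1024)
        if not chunk:          # empty read = client closed the connection
            break
        data += chunk
    conn.close()
    results.append(data)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # ephemeral port on loopback
server.listen(1)
port = server.getsockname()[1]

results = []
t = threading.Thread(target=run_server, args=(server, results))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
for i in range(3):
    client.sendall(f"pkt{i}\n".encode())
client.close()

t.join()
server.close()
print(results[0])  # b'pkt0\npkt1\npkt2\n' -- in send order, nothing lost
```

The reliability comes at the cost described above: every byte is held until it is acknowledged, so a single lost segment stalls everything behind it.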

UDP – User Datagram Protocol

UDP works similarly to TCP, but without the acknowledgments and re-transmissions that slow things down. When using UDP, “datagrams” (essentially packets) are sent to the recipient, and the sender does not wait to make sure each one arrived; it simply continues sending the next packet. UDP is sometimes referred to as a “best effort” service: there is no guarantee that you are receiving all the packets, and no way to ask for a packet again if you miss one. The upside of shedding all this processing overhead is that UDP is fast, and it is frequently used for time-sensitive applications such as online gaming or live broadcasts where low latency is more critical than occasional packet loss.
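The “fire and forget” pattern can be sketched in a few lines of Python. Note that on the loopback interface delivery is effectively reliable, so all three datagrams arrive here; on a real network any of them could silently disappear:

```python
import socket

# Sketch: UDP is connectionless "fire and forget". Each sendto() ships one
# datagram; there is no handshake, no ACK, and no retransmission.

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))  # ephemeral loopback port
receiver.settimeout(2)           # don't block forever if a datagram is lost
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(3):
    sender.sendto(f"frame{i}".encode(), addr)  # no delivery confirmation

received = [receiver.recvfrom(1500)[0] for _ in range(3)]
sender.close()
receiver.close()
print(received)
```

Because there is no connection state or acknowledgment traffic, the sender’s cost per packet is a single system call, which is exactly why latency-sensitive media protocols build on UDP.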

Application Layer Protocols for Video Streaming

RTMP – Real-Time Messaging Protocol

Originally a proprietary protocol developed by Macromedia (now Adobe) for real-time streaming of video, audio, and data between a server and Flash Player, RTMP remains a commonly used protocol for live streaming within production workflows even though Adobe has ended support for Flash. Based on TCP, RTMP is a continuous streaming technology and relies on acknowledgments reported by the receiver. However, these acknowledgments (ACKs) are not reported immediately to the sender, in order to keep return traffic low. Only after a sequence of packets has been received will a report of ACKs or NACKs (negative acknowledgments) be sent back. If any packets within that sequence are lost, the complete sequence of packets (going back to the last ACK) is retransmitted. This retransmission behavior can dramatically increase end-to-end latency. In addition, RTMP does not support HEVC encoded streams or advanced resolutions, and its bandwidth limitations rule out high bitrates. It’s worth noting that there are several variations of RTMP, including RTMPS, which works over a TLS/SSL connection.
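The recovery pattern described above, retransmitting everything since the last ACK, is the classic “go-back-N” scheme. This Python sketch is a conceptual illustration of that scheme only, not RTMP’s actual wire protocol:

```python
# Conceptual sketch (not RTMP's wire format): when one packet in a sequence
# is lost, everything after the last acknowledged packet is resent.

def go_back_n_retransmit(last_acked, highest_sent):
    """Sequence numbers the sender must resend after a loss report."""
    return list(range(last_acked + 1, highest_sent + 1))

# Packet 12 is lost; the receiver last ACKed packet 10 and packets up to 15
# were already sent, so five packets go out again to recover one loss.
resend = go_back_n_retransmit(10, 15)
print(resend)  # [11, 12, 13, 14, 15]
```

Resending packets 11 and 13 through 15, which already arrived safely, is the wasted work that inflates RTMP’s end-to-end latency on lossy links.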

RTP – Real-Time Transport Protocol

RTP is an internet protocol for real-time transmission of multimedia data in unicast or multicast mode. RTP runs over UDP for low latency and, though it does not include packet-loss recovery, it has mechanisms to compensate for minor loss of data when used in conjunction with the RTP Control Protocol (RTCP). While RTP carries the media streams (audio and video), RTCP is used to monitor transmission statistics, provide quality of service (QoS) feedback (i.e., packet loss, round-trip time, and jitter), and aid synchronization of multiple streams. RTP is also the transport protocol used by WebRTC, an open framework peer-to-peer protocol with a JavaScript API for sharing video between browsers. Although it works well for video conferencing between small groups or for multicast streaming on a closed network, WebRTC is limited in its ability to scale and reliably stream broadcast-quality video.
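To make the split between RTP (media) and RTCP (statistics) concrete, here’s a minimal Python sketch that parses the fixed 12-byte RTP header defined in RFC 3550. The sequence number lets a receiver detect loss and reordering, and the timestamp drives play-out timing; the sample packet values below are made up for illustration:

```python
import struct

# Sketch: parse the fixed 12-byte RTP header (RFC 3550, section 5.1).
def parse_rtp_header(packet: bytes) -> dict:
    if len(packet) < 12:
        raise ValueError("RTP header is at least 12 bytes")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,            # always 2 for current RTP
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,
        "sequence": seq,               # used to detect loss/reordering
        "timestamp": ts,               # media clock (90 kHz for video)
        "ssrc": ssrc,                  # identifies the stream source
    }

# Hand-built example packet: version 2, dynamic payload type 96,
# sequence 1000, timestamp 90000 (one second at the 90 kHz video clock).
pkt = struct.pack("!BBHII", 0x80, 96, 1000, 90000, 0xDEADBEEF) + b"payload"
hdr = parse_rtp_header(pkt)
print(hdr["sequence"], hdr["timestamp"])
```

Nothing in this header triggers retransmission by itself; a gap in the sequence numbers is simply reported back via RTCP so the receiver (or application) can compensate.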

RTSP – Real-Time Streaming Protocol

RTSP is an application layer control protocol that communicates directly with a video streaming server. It allows viewers to remotely pause, play, and stop video streams over the internet without the need for local downloads. Most notably used by RealNetworks’ RealPlayer, RTSP is still used for remote camera streams, online education, and internet radio. RTSP requires a dedicated streaming server and does not itself support content encryption or the retransmission of lost packets; it relies on RTP, in conjunction with RTCP, for media stream delivery.
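Like HTTP, RTSP is a plain-text request/response protocol: the player sends commands such as OPTIONS, DESCRIBE, SETUP, and PLAY while the media itself flows separately over RTP. The sketch below builds the kind of OPTIONS request a player would send; the URL and user agent are placeholders:

```python
# Sketch: RTSP requests look much like HTTP requests. CSeq numbers each
# request so responses can be matched to the commands that caused them.

def build_rtsp_request(method: str, url: str, cseq: int) -> str:
    return (
        f"{method} {url} RTSP/1.0\r\n"
        f"CSeq: {cseq}\r\n"
        "User-Agent: example-player\r\n"
        "\r\n"
    )

req = build_rtsp_request("OPTIONS", "rtsp://example.com/stream", 1)
print(req.splitlines()[0])  # OPTIONS rtsp://example.com/stream RTSP/1.0
```

This is why RTSP is described as a control protocol: it orchestrates the session (play, pause, teardown) but never carries the audio or video itself.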

SRT – Secure Reliable Transport

Pioneered by Haivision, SRT was developed to deliver the low latency performance of UDP over lossy networks with a high performance sender/receiver module which maximizes the available bandwidth. Codec agnostic, open source and royalty-free, SRT guarantees that the packet cadence (compressed video signal) that enters the network is identical to the one that is received at the decoder, dramatically simplifying the decoding process. SRT offers additional features including native AES encryption, so stream security is managed at the link level. It also allows users to easily traverse firewalls throughout the workflow by supporting both caller and listener modes (in contrast to both RTMP and HTTP, which only support a single mode).

In addition, SRT can bring together multiple video, audio, and data streams within a single SRT stream to support highly complex workflows without burdening the network administrators. Within the sender/receiver module, we incorporated the ability to detect the network performance with regard to latency, packet loss, jitter, and the available bandwidth. Advanced SRT integrations can use this information to guide stream initiation, or even to adapt the endpoints to changing network conditions.
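In contrast to the go-back-N style recovery of TCP-based protocols, SRT’s receiver reports only the specific packets it is missing, and the configurable latency buffer gives those retransmissions time to arrive before their play-out deadline. This Python sketch illustrates the selective-retransmission idea only; it is not SRT’s actual wire protocol:

```python
# Conceptual sketch (not SRT's wire format): selective retransmission.
# The receiver reports only the gaps in the sequence space, so one lost
# packet costs one retransmission rather than a whole window.

def missing_sequences(received, expected_count):
    """Sequence numbers a receiver would report back to the sender."""
    seen = set(received)
    return [seq for seq in range(expected_count) if seq not in seen]

# Packets 3 and 7 were lost out of 0-9; only those two are re-requested.
arrived = [0, 1, 2, 4, 5, 6, 8, 9]
naks = missing_sequences(arrived, 10)
print(naks)  # [3, 7]
```

Combined with the latency buffer, this keeps recovery traffic proportional to the actual loss rate, which is what lets SRT stay close to UDP’s latency on lossy links.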

If you’re interested in learning more about SRT open source streaming for low latency video, download this technical overview and learn about its widespread adoption in the broadcast and streaming industries.
