Video Encoding Basics: What is a Video Codec?

Overwhelmed and confused by some of the terminology and concepts that surround video encoding? Don’t worry, we’ve got you covered! In our Video Encoding Basics series, we’ll explore the fundamentals of video encoding with a high-level explanation of key concepts. We’ll guide you through the basics of what you should know, dispel common myths along the way, and predict future trends. So let’s get the ball rolling with codecs.

What is a Video Codec and What Does it Do?

The term codec is a portmanteau of COder and DECoder and describes a process for compressing and decompressing data, whether as files or as a real-time stream. For broadcast engineers, a codec usually refers to the compression standard used by a video encoder, decoder, or transcoder.

Why is Compression Necessary?

Uncompressed raw video represents a colossal amount of data to transport or store over any connection. Given the constant struggle for bandwidth efficiency, compression dramatically reduces the bandwidth required, making it possible for real-time video streams or files to be transmitted across constrained networks. Video compression standards such as H.264/AVC and H.265/HEVC can reduce raw content data by as much as 1,000 times.
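To put that in perspective, here is a quick back-of-the-envelope calculation in Python. The frame size, frame rate, chroma subsampling, and target bitrate below are illustrative assumptions rather than figures from any particular encoder:

```python
# Back-of-the-envelope math: why raw video needs compression.
# All figures below are illustrative assumptions, not measured values.

width, height = 1920, 1080      # Full HD frame
fps = 60                        # frames per second
bits_per_pixel = 12             # 8-bit 4:2:0 chroma subsampling (8 luma + 4 chroma)

raw_bps = width * height * fps * bits_per_pixel
print(f"Uncompressed 1080p60: ~{raw_bps / 1e6:.0f} Mbps")      # ~1,493 Mbps

encoded_bps = 5_000_000         # a typical H.264 streaming bitrate (assumed)
print(f"Compression ratio: ~{raw_bps / encoded_bps:.0f}:1")    # ~299:1
```

Even this conservative example yields a compression ratio in the hundreds; higher resolutions, higher frame rates, and newer codecs push that figure further toward the 1,000:1 mark mentioned above.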

Video Compression 101

Compression techniques are used extensively across all elements of today’s computer and networking architectures for efficient storage and transport. The goal of a video codec is to intelligently reduce the size of video content, i.e. the total number of bits needed to represent a given image or sequence, while maintaining picture quality. The compression itself is performed by the codec’s algorithm, the formula that determines the most efficient way to represent the data.

Compression Techniques

There are several different codecs and methods of compression, but the basic concepts remain the same. Most codecs use “lossy” compression which, at a high level, means that redundant spatial and temporal information is discarded when a video is compressed. “Lossless” compression is used when picture quality must remain identical to the original source, though it reduces file and stream sizes by only a modest amount.

Spatial reduction, or intra-frame compression, reduces the size of the data by selectively removing redundant information within a single video frame. Temporal reduction, or inter-frame compression, significantly decreases the amount of data needed to store a frame by encoding only the pixels that change between consecutive frames in a sequence. By grouping multiple frames into a group of pictures, or GOP, inter-frame compression can dramatically reduce file and stream sizes, which makes it the most common approach for video.
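As a rough illustration of the inter-frame idea, the toy sketch below (using NumPy, our own choice of tool) compares two consecutive frames and counts how few pixels actually change. Real codecs encode motion-compensated residuals rather than raw pixel differences, so treat this strictly as a conceptual sketch:

```python
import numpy as np

# Toy illustration of inter-frame (temporal) compression: rather than storing
# every frame in full, store the first frame plus only the pixels that change.
# Real codecs encode motion-compensated residuals, not raw pixel differences.

rng = np.random.default_rng(0)
frame0 = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint8)  # reference frame

frame1 = frame0.copy()
frame1[100:200, 300:500] ^= 0xFF        # only a small region changes between frames

changed = np.count_nonzero(frame1 != frame0)
total = frame1.size
print(f"Changed pixels: {changed:,} of {total:,} ({100 * changed / total:.2f}%)")
```

In this contrived case less than one percent of the frame changes, which is exactly the kind of redundancy inter-frame compression exploits within a GOP.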

The Essential Guide to Low Latency Video Streaming

This white paper explores the fundamentals of video encoding, tips for optimizing workflows, and the benefits of different codecs and protocols.

Understanding Bitrates

Within the context of live video streaming, the video bitrate is the number of bits processed per unit of time, commonly measured in bits per second. In general, a higher bitrate allows for higher image quality in the video output. When compressed to the same bitrate, video encoded with a newer codec such as HEVC will be significantly higher quality than video encoded with an older codec such as H.264. Conversely, HEVC can deliver the same video quality at a lower bitrate than H.264.
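The practical impact of bitrate is easy to see with a little arithmetic. The sketch below assumes a constant bitrate and uses the commonly cited 2:1 HEVC-versus-H.264 figure from above; the 8 Mbps starting point is an assumption, not a recommendation:

```python
# Rough sizing: bits per second x duration = total bits.
# The 2:1 HEVC-vs-H.264 ratio reflects the commonly cited figure, not a measurement.

def stream_size_gb(bitrate_mbps: float, hours: float) -> float:
    """Approximate storage for a stream at a constant bitrate."""
    bits = bitrate_mbps * 1e6 * hours * 3600
    return bits / 8 / 1e9   # gigabytes

h264_mbps = 8.0                      # assumed 1080p H.264 bitrate
hevc_mbps = h264_mbps / 2            # similar quality at roughly half the bitrate

print(f"1 hour of H.264 @ {h264_mbps} Mbps: {stream_size_gb(h264_mbps, 1):.1f} GB")
print(f"1 hour of HEVC  @ {hevc_mbps} Mbps: {stream_size_gb(hevc_mbps, 1):.1f} GB")
```

The same logic applies in reverse: at a fixed bitrate budget, the more efficient codec spends its bits on picture quality rather than on redundancy.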

Current Video Codec Options

MPEG-2
MPEG-2, the precursor to H.264 and HEVC, has been around since the 1990s and pioneered video encoding for digital television and DVDs. It allows for high quality but uses less sophisticated compression techniques, as it was designed around the compute power available at the time. While MPEG-2 video encoding is slowly being phased out in favor of newer encoding standards, it’s still used in many legacy applications, including over-the-air terrestrial broadcast (ATSC), cable (DVB-C), and satellite television systems.

JPEG2000
Introduced in 2000, JPEG2000 is a standard that’s still used today in a variety of applications including digital cinema, medical imaging, geospatial data, and document archiving. JPEG2000 uses only intra-frame encoding, meaning each frame is compressed individually. Compared to newer codecs that use inter-frame compression, this results in higher bandwidth requirements for transmission and greater storage needs.

H.264 / AVC (Advanced Video Coding)
For high-quality video streaming over the internet, H.264 has been widely adopted and is estimated to make up the majority of multimedia traffic. H.264 has a reputation for excellent quality, encoding speed, and compression efficiency. The first version of the standard was completed in 2003, and although extensions of its capabilities have been added in subsequent editions, it’s generally considered an aging compression scheme. As the demand for higher video resolutions continues to grow, further efficiencies are required.

H.265 / HEVC (High Efficiency Video Coding)
The successor to H.264, H.265 (or HEVC) is fast becoming ubiquitous thanks to the proliferation of 4K content. At an identical level of visual quality, HEVC enables massively improved compression, allowing video to be compressed at roughly half the bitrate of H.264, making it about twice as efficient. When compressed to the same bitrate as H.264, HEVC delivers significantly better visual quality.

VP9
Developed by Google and used by YouTube, VP9 is a royalty-free, open-source codec, though it offers fewer benefits than HEVC. VP9 lags in several key feature areas, and with few commercially available real-time VP9 encoders, it has little appeal as a contribution format.

Looking to the Future

While 2020 saw the finalization of two new codecs, their relevance will only become clear over the next few years, once hardware support and the economics of licensing become clearer.

JPEG XS is a visually lossless compression standard underpinning compressed video transport in SMPTE ST 2110 workflows. JPEG XS is the ISO standardization of TICO compression technology, a lightweight, visually lossless compression scheme useful for 4K and 8K contribution workflows over existing 3G-SDI and 12G-SDI or hybrid SDI/IP networks. Although well suited to carrying 4K signals across a 10 Gbps network for edge devices that cannot handle a full 12 Gbps connection, TICO only allows for limited configuration settings. JPEG XS looks set to become a big factor in the remote production and live event spaces in the near future, as it offers relatively low latency and low complexity and can support resolutions up to 8K over IP.
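To see why lightweight mezzanine compression matters here, consider the bandwidth of an uncompressed 4K60 signal against a 10 Gbps network link. The bit depth, chroma subsampling, and compression ratio below are illustrative assumptions:

```python
# Why light mezzanine compression (e.g. JPEG XS/TICO) matters for SDI-over-IP.
# The figures below are illustrative assumptions for a 4K60 10-bit 4:2:2 signal.

width, height, fps = 3840, 2160, 60
bits_per_pixel = 20                     # 10-bit 4:2:2 (10 luma + 10 shared chroma)

active_bps = width * height * fps * bits_per_pixel
print(f"Uncompressed 4K60 active video: ~{active_bps / 1e9:.1f} Gbps")  # ~10 Gbps

ratio = 4                               # a light, visually lossless ratio (assumed)
print(f"After ~{ratio}:1 compression:   ~{active_bps / ratio / 1e9:.2f} Gbps, "
      "leaving ample headroom on a 10 Gbps link")
```

Uncompressed, the signal saturates a 10 Gbps link before packet overhead is even counted; a modest, visually lossless ratio brings it comfortably within budget.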

VVC (Versatile Video Coding), or H.266, is a next-generation compression standard that follows in the footsteps of HEVC and was finalized in July 2020. VVC targets 30 to 50 percent better compression efficiency compared to HEVC. However, we may only see the first implementations in consumer hardware in mid-2022 or even later.

AV1, developed by the Alliance for Open Media (AOM), shows great promise as a new open and royalty-free standard. However, initial test deployments have required significantly more computational power than HEVC to manage the additional complexity. For the moment it is a work in progress, and there are simply not enough details available to predict whether the broadcast industry, as well as device manufacturers, will adopt AV1 as a standard video codec.

LCEVC (Low Complexity Enhancement Video Coding), or MPEG-5 Part 2, is a composite technology developed by V-Nova that uses an existing codec such as HEVC or AVC as a base layer and adds an enhancement layer to improve the efficiency of that existing codec. LCEVC was finalized and promoted as an international MPEG standard in October 2020, and of the new codec technologies it could have the biggest impact on streaming by late 2021 or early 2022.

EVC (Essential Video Coding) began development in 2018 and continued throughout 2020 with no finalization target date. EVC seeks to provide an alternative codec, similar in performance to HEVC but with different licensing arrangements (it won’t be free), mostly targeting 4K and HDR workflows. Even if EVC is finalized in early 2021, adoption may not occur until late 2023.

Which Codec is the Best One For You?

You may have heard of the so-called “codec wars,” where codecs supposedly battle it out for the top spot. In reality, the choice simply boils down to the demands of the target application and the viewing devices on which it will be used.

For companies focused on live video workflows for broadcast contribution and distribution, HEVC is the obvious current choice. It’s baked into billions of chipsets, from encoders and TVs to set-top boxes and mobile devices, making it a popular and realistic option for contribution, transcoding, and distribution. For established digital broadcast systems, H.264 may still be required. In most cases, broadcast engineers need to support both codecs.

The Haivision ecosystem of products supports both HEVC and H.264 and includes support for the Secure Reliable Transport (SRT) open source protocol. SRT is fast becoming the de facto low latency video streaming standard in the broadcast and streaming industries. What’s more, unlike some other solutions that only support specific codecs, SRT is codec agnostic, allowing users to future-proof their workflows.
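As a concrete illustration of a codec-agnostic SRT workflow, the sketch below uses Python to invoke ffmpeg. It assumes an ffmpeg build with libx265 and libsrt available; the input file, destination address, and bitrates are placeholders, and this is not a Haivision-specific configuration:

```python
import subprocess

# Minimal sketch: encode a local file to HEVC and push it over SRT with ffmpeg.
# Assumes an ffmpeg build that includes libx265 and libsrt; swap in libx264 for
# H.264. All file names, addresses, and bitrates are placeholder values.

cmd = [
    "ffmpeg",
    "-re", "-i", "input.mp4",                  # read the source at its native frame rate
    "-c:v", "libx265", "-b:v", "4M",           # HEVC video at ~4 Mbps
    "-c:a", "aac", "-b:a", "128k",             # AAC audio
    "-f", "mpegts",                            # SRT payloads are typically MPEG-TS
    "srt://203.0.113.10:9000?mode=caller",     # SRT caller to a listener at this address
]
subprocess.run(cmd, check=True)
```

Because SRT carries the transport stream without caring which codec produced it, the same pipeline works whether the payload is H.264, HEVC, or something newer.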

The Makito X4 Video Encoder

Learn the ins and outs of the Makito X4 Video Encoder in our comprehensive Makito X Series datasheet.
