What You Need to Know About the Future of Video Streaming Technology

Every year there are advancements made in video streaming technology. So what should you be paying attention to in 2018? We asked three people who have been in the video streaming industry for many years about the main trends to keep an eye on in the upcoming year. What will be the big advancements, and what should those involved in video streaming technology be doing to help make the industry better? Let’s take a look at what our pundits had to say, and what they think we should all be paying attention to in 2018.

Look for pushes in AI, metadata, and video streaming micropayment options

Marc Cymontkowski is the senior director of core technology at Haivision. He has been developing video streaming tech since 1999, and was instrumental in creating SRT and helping it go open source. There are three areas he thinks we should be paying attention to in 2018.

First, he believes there will be a great push towards using AI to further enhance the quality of video on demand (VOD) and live video streaming. Accelerators and dedicated hardware are expensive; a way around this, Marc says, may be to send uncompressed video through an AI engine. In fact, a team at Carnegie Mellon has developed something called “model predictive control” (MPC), which predicts changes in network conditions and then optimizes based on the model it creates. At MIT, researchers have created a neural network to improve video streaming. Dubbed “Pensieve,” the AI system “uses machine learning to pick different algorithms depending on network conditions,” according to an MIT News article. This allows them to deliver higher-quality streaming experiences that buffer less than many existing systems. While other technologies like our own Network Adaptive Encoding already react to bandwidth fluctuation, the current AI research is still in its infancy. The bulk of the research focuses on VOD for now, but AI could, at some point, be used in live scenarios. Watch for it!
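To make the adaptive-streaming idea concrete, here is a minimal, rule-based bitrate picker of the kind that MPC and Pensieve improve upon: it reacts only to the last throughput measurement, whereas the research systems predict future network conditions. This is an illustrative sketch, not code from either project; the rendition ladder and the `safety` margin are made-up values.

```python
# Minimal adaptive-bitrate (ABR) sketch: pick the highest rendition whose
# bitrate fits within a safety margin of the measured throughput.
# Rule-based baselines like this are what learned systems (e.g. Pensieve)
# and predictive ones (e.g. MPC) aim to beat.

RENDITIONS_KBPS = [400, 1200, 2500, 5000]  # illustrative bitrate ladder

def pick_rendition(measured_kbps: float, safety: float = 0.8) -> int:
    """Return the highest bitrate that fits within safety * throughput."""
    budget = measured_kbps * safety
    chosen = RENDITIONS_KBPS[0]  # always fall back to the lowest rendition
    for rate in RENDITIONS_KBPS:
        if rate <= budget:
            chosen = rate
    return chosen

if __name__ == "__main__":
    for throughput in (300, 1600, 4000, 9000):
        print(throughput, "kbps ->", pick_rendition(throughput), "kbps")
```

A learned or predictive controller replaces the fixed `safety` heuristic with a policy that also weighs buffer occupancy and forecast throughput, which is where the quality gains reported by the MIT and Carnegie Mellon teams come from.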
Next, Marc is paying more attention to metadata. Metadata is crucial to helping viewers — and those who use video as a resource, like editors and archivists — find exactly what they’re looking for, and there should be a major shift towards enabling metadata in deeper and more meaningful ways in the coming year. Interestingly, AI could also help make metadata more readily available. When dealing with rich content like video, AI will be a big help in ensuring that all available metadata can be parsed. This could even include video image recognition that goes as far as recognizing the emotions of people in videos, allowing people to search based on those criteria. Moreover, “AI improves finding and organizing unstructured digital content, whether that is professional video, digital surveillance video, health and genomic data or engineering and scientific content. AI can be used for intelligent allocation of content into a storage hierarchy depending upon the probability of need,” according to an article in Forbes. As Marc says, “Content is good, but if you don’t know where it’s from, it does no good.” Enabling those who work with video to perform more precise searches to find what they’re looking for will be a major boon to the industry.

Finally, Marc predicts we will see a move towards blockchains for video streaming micropayments. A model in which viewers use cryptocurrency to pay only for the content they consume, as opposed to monthly or yearly subscriptions, would benefit both the viewer and the business. For any online business, security is key, and security in blockchain tech is robust, to say the least. This would give more businesses the confidence they need to get into the industry, assured that they can keep their consumers’ payment information safe and secure. And for viewers, micropayments mean paying only for what they’ve watched.
Not interested in watching anything that month? Don’t pay for it. For the time being, Marc is content to watch those who are investing in this technology, and we’ll all keep an eye out for advancements in AI, metadata, and blockchains for video streaming payments.

Focusing on making codecs perform better in 2018

Mahmoud Al-Daccak is Haivision’s EVP of product development and CTO. He has been developing technology for the past 25 years and has been at the forefront of many great advancements in video streaming. Mahmoud and I started our conversation by discussing codecs: the movement from MPEG-2 to H.264 and then to HEVC. Mahmoud is quick to note that, despite the advancements made in those codecs, the compression fundamentals have not changed at all. What has changed is that new codecs are equipped with added tools and complexity, and we can add those new features because of the processing power now available to us. For example, HEVC takes five to six times the processing power of H.264. But our codecs are getting more efficient: we are able to achieve ever-increasing quality at the same bit rates, or maintain similar quality at reduced bit rates.

So the focus for 2018, according to Mahmoud, is not just the codec, but how you can help the codec do a better job. One example is dealing with high-frequency noise coming into your stream: you might be able to implement certain intelligent or “smart” filters to remove it, which will enhance the quality of the image tremendously for any codec. There are a lot of clever things that can be done. One idea Mahmoud suggests is smart filtering on the raw image to help the codec perform more efficient encoding while preserving perceptual quality, so that the end result is high-quality viewing. You could also add smart object classification, leveraging AI to identify these objects and treat them according to
Why So Many Companies Are Joining the SRT Alliance (And Why You Should Care)

It has been just a few short months since the formation of the SRT Alliance, an open-source initiative that is changing the way video is streamed over the internet. With technology originally developed and made available as open source by Haivision, the SRT Alliance now counts 70 members that have jumped on board to help ensure that high-quality, low-latency video is available to anyone who wants to stream it. Earlier this week, we announced that Kaltura has joined our ranks along with more than 30 broadcast and streaming companies. Since NAB 2017, the momentum of open-source SRT has really taken off, so we thought we should take a moment to look at SRT in depth here and get into how and why it was developed. We’ll learn why it’s so important right now. And we’ll answer the burning question: why is SRT open source? So read on and find out the history of SRT, where it is, and where it’s headed (maybe with your help via the SRT Alliance)!

So, what the heck is SRT?

SRT is short for Secure Reliable Transport. It’s a technology package and protocol that connects two endpoints or serves as a contribution feed for the purpose of delivering high-quality, reliable, low-latency video and other media streams across unpredictable networks, including the public internet. That’s it in a nutshell. Read here for a more in-depth definition of SRT, and check it out on GitHub here. SRT optimizes streaming performance across unpredictable networks with secure streams and easy firewall traversal, bringing the best-quality live video over the worst networks by accounting for packet loss, jitter, and fluctuations in bandwidth, saving you time and money. Pretty cool.

The beginnings of SRT

The first iteration of SRT appeared in the Haivision HaiGear Labs (our group focused on innovation and future video streaming technologies) at IBC in 2013. Early development was led by our senior director of core technology, Marc Cymontkowski, with input from our CMO Peter Maag and CTO Mahmoud Al-Daccak.
They had all come to the realization that, in order for a video transport protocol to be truly successful, it needed to be able to send TS streams not only within corporate networks but also between them. A lot of research went into evaluating third-party solutions, but none could cut it. That’s when the decision was made to develop our own protocol for low-latency video transport. Since Marc had implemented public internet transport technology based on the UDT library in the past, he started running new tests. He had some ideas on how to bring the latency all the way down, but that required low-level network programming. So we pulled in one of our great video experts at Haivision, who added functionality for fast packet retransmissions. He then added an encryption scheme he had previously designed for products that achieved DCP LLC approval. A couple of experiments were done with the new solution, but the quality wasn’t convincing in low-latency cases. Marc believed he knew why: the receiver needed to recreate the low-level timing of the input signal on the output side, which is critical especially for VBR streams, and this ability became one of the most important features of SRT. That did the trick, and a ton of optimizations followed: a rewrite of the whole packet retransmission functionality, a mechanism for stream reshaping, and our encryption protocol. And then SRT was born.

Why does the world need SRT?

Studies suggest that humans now have an attention span shorter than a goldfish’s, clocking in at less than 9 seconds. And the attention span is even WORSE for online viewers. According to Limelight’s (an SRT Alliance member, by the way) State of Online Video study, “video buffering remains the top frustration with online video viewing, with almost half of online viewers abandoning a video if it stops playing to re-buffer more than twice.” For live streaming, it’s critical not only that your video performs well but that it does so at the lowest latency, or you’ll lose viewers.
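The timing-recovery idea from SRT’s origin story above can be illustrated with a toy sketch: the receiver buffers each packet and releases it at its sender timestamp plus a fixed latency budget, recreating the original pacing despite jitter and reordering, and dropping anything that arrives too late to meet its deadline. This is not SRT’s actual implementation (that lives in the GitHub sources); the timestamps and the millisecond units here are illustrative.

```python
import heapq

# Toy sketch of receiver-side timing recovery: each packet carries its
# sender-side timestamp; the receiver releases each packet at
# send_time + latency_budget, which smooths jitter and restores the
# sender's pacing. Packets arriving after their deadline are dropped,
# since no retransmission can make them useful anymore.

def release_schedule(packets, latency_budget_ms):
    """packets: iterable of (send_time_ms, arrival_time_ms, seq).
    Returns [(release_time_ms, seq), ...] in release order."""
    heap = []
    for send_ms, arrival_ms, seq in packets:
        deadline = send_ms + latency_budget_ms
        if arrival_ms <= deadline:          # arrived in time to be played
            heapq.heappush(heap, (deadline, seq))
        # else: too late for its slot -- dropped
    return [heapq.heappop(heap) for _ in range(len(heap))]

if __name__ == "__main__":
    # Packets sent every 10 ms, arriving jittered and out of order;
    # the last one arrives past its deadline and is discarded.
    pkts = [(0, 5, 0), (20, 60, 2), (10, 35, 1), (30, 200, 3)]
    print(release_schedule(pkts, 120))
```

The key design point is the trade-off the latency budget encodes: a larger budget gives retransmissions more chances to repair loss, while a smaller one keeps the stream closer to live.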
Overcoming today’s low-latency video streaming challenges usually means spending A LOT of money on reserved links like MPLS or satellite networks, not to mention the time it takes to set up your network, the annual contracts, scheduling, downlink provisioning, and ensuring stream traversal with no firewall issues. Wouldn’t it be great if you could just use the network you have, like the public internet? Yes, absolutely, and SRT makes it possible. SRT ensures low-latency, secure video delivery from one point to the other, over lossy networks, unreliable connections, and the public internet.

When you’re watching something live, is it really live?

Have you ever watched a live stream online that starts fast-forwarding to catch up? Whenever you stream live, all sorts of problems can pop up: bandwidth fluctuations, jitter, packet loss, latency, and so on. Traditional transport protocols like UDP would rather drop packets than wait, and RTP is simply insensitive to packet loss. SRT won’t leave you sacrificing resolution or latency; it lets you have it all so you can successfully deliver secure, low-latency video over any network.

Why use SRT over other protocols?

When comparing protocols for any internet delivery system, one has to consider the end results of each. Here are your options, in a nutshell.

Via TS-UDP w/FEC – Stay fast, but risk accidents: It gives you limited packet loss recovery and handles jitter poorly, resulting in choppy, blocky, reordered video that leaves you scratching your head wondering what you just missed, or screaming at your computer “this game cheats!”

Via HLS/RTMP over TCP/IP – Arrive later, stay safe: Offers a typical 5 to 30 seconds of latency (average backhaul contribution is less than 500 ms) and has potential for TCP congestion, which means you might lose your audience’s attention while your stream is trying to figure out whether everything has been received for transmission.

What’s the end result of using SRT?
Aside from the fact that