
V-Nova – At-home Production, Contribution, and Distribution over Managed Networks [Webinar Recording]

Last updated Jun 29, 2023 | Published on Jun 9, 2020 | Podcast, Webinars

V-Nova – At-home Production, Contribution, and Distribution over Managed Networks

 

Download Presentation & Watch Recording


  • MPEG-5 LCEVC – Low Complexity Enhancement Video Coding
      • A new standard that alleviates the challenges of video streaming on a massive scale
      • Data stream structure defined by two component streams
        • A base stream decodable by a hardware decoder
        • An enhancement stream suitable for a software processing implementation with sustainable power consumption
          • New features such as compression capability extensions to existing codecs
          • Lower encoding and decoding complexity
        • On-demand and live streaming applications
  • Developments with SMPTE ST 2117 VC-6
    • Codec of choice for contribution, distribution and remote production workflows.
    • More efficient alternative to JPEG 2000, JPEG XS & Uncompressed
  • P.Link with AWS MediaConnect and Direct Connect
    • High quality, low latency media transport without the need for costly dark fiber links

Contribution Encoder

 

Jim Jachetta (00:02):

Good morning, everyone. I’m Jim Jachetta, CTO and Co-founder of VidOvation. Today we have my good friend, Matt Hughes from V-Nova. He’s Senior VP of Global Sales. Give us a little overview, Matt, of the knowledge you’re going to lay on us today.

Matt Hughes (00:23):

Yeah. Thanks, Jim. And thanks to VidOvation for setting this up; it’s important for us to have a chance to talk with you guys about V-Nova. We’re a UK, London-based company, in the process of expanding as well. What we do is, we’re a compression technology company. We own a bunch of patents around video compression and image compression. What we’ve done is taken our libraries and applied them to various aspects of the broadcast industry. What I’m going to go through today is show you some of those new technologies that are being advanced through MPEG and SMPTE, and also the products we have in the market that offer encoding and decoding, a server-based system.

Jim Jachetta (01:17):

Well, some of the technology you’re going to present, Matt, is kind of a new approach, a contrarian approach, so I put together this little questionnaire to see where customers are at. Everyone should be able to see it. We asked the question: what video codec or codecs do you use today? It’s multiple choice, and you can pick more than one answer. So if you use uncompressed and JPEG 2000, we’d love to hear from you, just to see where you folks are at today in your IP video codec use right now.

Jim Jachetta (02:03):

So it looks like… oh, wow. It’s pretty evenly distributed here. Looks even, except for JPEG XS. So here, let me put this up. Oh, what did I do? Ah, I hit the wrong button. Hold on, hold on. Close it. Now I share, there we go. So there are the results. I guess that’s not surprising. A lot of people are still using H.264. I don’t want to spoil the surprise, but I think Matt might have some enhancements for 264, to take something legacy like that and improve it. We’ve got 265 and uncompressed, and then the JPEG 2000 folks. Matt is going to present maybe some alternatives to that. So here, if I hide that… thank you, folks, for sharing. Let me see. You should have control again, Matt.

Matt Hughes (03:06):

Okay. Alright. Thanks, Jim. Some of the technologies that we’ve been working on specifically have been for MPEG and SMPTE. We’ve got a new standard that’s part of MPEG-5. MPEG-5 is actually a suite of standards, and part of that suite is MPEG-5 Part 2, LCEVC. And with SMPTE, the new standard is VC-6, or SMPTE ST 2117. I’m going to go through both of these and explain where we are today with these two technologies. One of the things that’s majorly happening, and you guys see this every day, is that in your typical home you’ve got people watching, people on Zoom, people on GoToMeeting; Netflix is running, Prime is running, Twitch, YouTube; somebody is on a bicycle doing Peloton. You’ve got a lot of streaming. Well, I’m not, but some people are.

Matt Hughes (04:04):

You’ve got a lot of streaming services happening inside the home. 65% of the worldwide mobile downstream traffic is video, and 11% is Netflix. It just kind of gives you an idea of where that lies. The internet is under massive strain right now. So, how do we make that better? In the US, it’s Netflix at the top, as it is in Europe. In the Middle East and Africa, you’ve got YouTube. In Asia, you’ve got media streamers. All of these are taking up the bandwidth everywhere in the world. Viewers expect high quality. No matter what service you’re on, you still want to be able to see your video. You don’t want to see a spinning wheel. On all the devices, everything that’s being used, you want to have a good experience, a good quality of experience. So, high demand and a good experience, which typically in the past meant using a lot of bandwidth. How do we cut that? How do we make that bandwidth smaller? And what can we do to offer our customers, or the general public, the consumers, a better experience?

Matt Hughes (05:23):

Low latency is also critical. You want to be able to have real-time video. You don’t want to wait; the GOP structure way of working can take a lot of time, adding more latency within the workflow. How do you remove that? How do you make it so you don’t have to have a massive GOP structure with your streaming services? How does that work? Our solution is LCEVC. And of course, right now I’ve got a helicopter going over my house. It’s not my helicopter.

Jim Jachetta (05:57):

We can’t hear it.

Matt Hughes (05:57):

You guys will hear it soon because it’s [inaudible 00:06:01]. Anyway, it’s London. Can you hear it in the background, or is it still pretty good?

Jim Jachetta (06:07):

Still pretty good.

Matt Hughes (06:10):

That’s impressive because it’s really low.

Jim Jachetta (06:12):

Oh, there you go. Now I hear it.

Matt Hughes (06:14):

[inaudible 00:06:14]. This always happens. LCEVC is Low Complexity Enhancement Video Coding, which means we’re taking a base codec, the existing codec that you already have, like H.264, or it could be HEVC or AV1, and we’re enhancing that. We’re making the enhancement on top of that existing codec; we start with a base codec, which is really important for this technology. That’s how we differentiate from other players out there. Within MPEG-5, the suite of MPEG-5 standards, this was important for that suite, because we didn’t want to make a new codec, and MPEG didn’t want to put another new codec out in the market. So what can we do with existing technology and the new technology that comes along, and how can we make that better?

Jim Jachetta (07:10):

Basically we could take that legacy AVC/H.264 and give it new life, supercharge it, as it were, without having to throw out that investment in H.264 infrastructure?

Matt Hughes (07:29):

Yeah. If you’ve put in a massive headend with encoders, you don’t want to have to switch those out because you need a larger encoding farm at your headend; you don’t want to have to do that. You can keep what you’ve got existing today and then upgrade that workflow with this enhancement coding, which is really important.

Jim Jachetta (07:50):

Well, another thing that I like too is that not every subscriber… If it’s an OTT application, maybe some customers haven’t updated their set-top box yet to support the new standard; they’ll still get the base H.264, and the secondary stream, the supplemental stream, will just be discarded or ignored. Right? So it-

Matt Hughes (08:12):

Exactly.

Jim Jachetta (08:14):

… it scales nicely.

Matt Hughes (08:16):

Yeah. If you’ve got the base codec, and the base codec, let’s say, is doing the SD version of your channel, and then your HD and UHD are added with this MPEG-5 LCEVC, you would still have that base codec. So you still have a base H.264 being sent to the set-top boxes that haven’t been upgraded, or it could be somebody paying for premium content that’s in HD or in UHD, however somebody wanted to work that. It’s low complexity. This means it reduces the processing cost of encoding on top of the base codec. It’s a software plugin update, as you mentioned. So it’s a software plugin; we’re not changing the hardware. It is specifically software for players and existing devices.
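
To make that fallback concrete, here is a minimal sketch with hypothetical decoder hooks. This is illustrative pseudologic only, not the actual V-Nova SDK API: every receiver gets the same base-plus-enhancement service, and a device that can’t use the enhancement simply ignores it.

```python
# Hypothetical fallback logic: one transmitted service, two experiences.
def present(base_payload, enh_payload, decoders):
    base = decoders["h264"](base_payload)            # always decodable, e.g. the SD service
    if "lcevc" in decoders and enh_payload is not None:
        up = decoders["upsample"](base)              # scale the base toward HD/UHD...
        return up + decoders["lcevc"](enh_payload)   # ...and layer the residual detail on
    return base                                      # legacy set-top box: base only

# Toy stand-ins so the flow is visible end to end:
legacy = {"h264": lambda b: f"SD[{b}]"}
upgraded = {**legacy, "upsample": lambda f: f"up({f})", "lcevc": lambda e: f"+res[{e}]"}
print(present("base-bits", "enh-bits", legacy))     # SD[base-bits]  (enhancement ignored)
print(present("base-bits", "enh-bits", upgraded))   # up(SD[base-bits])+res[enh-bits]
```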

Jim Jachetta (09:09):

The acronym again, it’s Low Complexity… What does it stand for again?

Matt Hughes (09:15):

Enhance [inaudible 00:09:16] codec.

Jim Jachetta (09:19):

Yeah. Your audio is dropping out a little bit. Low Complexity Enhanced Video Codec?

Matt Hughes (09:24):

Enhancement Video Coding. Coding.

Jim Jachetta (09:27):

Coding.

Matt Hughes (09:27):

Not a codec.

Jim Jachetta (09:29):

Not codec.

Matt Hughes (09:29):

We want to be sure. This isn’t a codec.

Jim Jachetta (09:31):

It is not a codec because it can’t stand on its own. Okay. So it’s enhancement coding.

Matt Hughes (09:38):

Coding.

Jim Jachetta (09:39):

If the primary stream went away, this of course couldn’t stand on its own or the base stream. Got it.

Matt Hughes (09:46):

Yeah. Solving high quality streaming at scale. When you’re scaling up: reducing the network traffic without changing the quality, delivering the highest quality, and providing low latency, those are the keys. And the benefits for the viewers, for the consumers: killing that buffering wheel, delivering the highest quality, and having a fast start-up time for those adverts, for creating ad revenue, which is really important. For service providers, you want to be able to beat the competition. You want to be the one that’s not waiting for a massive GOP structure when your breaking news happens, and you want to make sure your customers aren’t churning, going to another service because it’s better quality. You want to be able to reach everyone.

Matt Hughes (10:35):

So those people that may be on a 2G or 3G network and not yet on 4G, or that are going to be on 5G, you still want to be able to target those people on 2G, 3G, and 4G networks with still very good quality video. And you want to minimize those operating costs. You want to bring down your OPEX. You want to bring down your CDN delivery costs, and lower that encoding infrastructure cost rather than paying for something brand new, which would cost a lot of money.

Matt Hughes (11:05):

Here’s an example at 1.4 megabits per second. On the left-hand side we have x264; on the right-hand side, this is LCEVC using x264 as the base. You can see a big change in the quality: the quality of the sign, the yard lines here. You can see the difference in the numbers; you can see differences where it starts to break up [inaudible 00:11:33] as well. This is one of the examples at 1.4 megabits per second.

Jim Jachetta (11:40):

What’s happening here? That secondary stream, that secondary coding, is adding some of the missing elements, some of that missing detail, back into the image. Correct?

Matt Hughes (11:51):

Exactly. We’re taking that and we’re adding it and correcting it as well and improving it. We’re making that improvement. The source is still a high quality source, no matter where you get the source. The way that it degrades on the left-hand side, is different than how we’re adding the enhancement on the right-hand side.

Jim Jachetta (12:14):

The enhancement is kind of like an error signal. It’s looking at the original video, looking at how well that 264 is doing, and then kind of coming up with a difference and then sending that as the added information?

Matt Hughes (12:30):

Exactly. And then rebuilding it on the decoder, on the other end.
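
Jim’s “error signal” description matches the general shape of residual enhancement. Here is a minimal numpy sketch of that loop, assuming a crude quantizer as a stand-in for a real lossy base encode/decode; V-Nova’s actual tools are far more sophisticated, but the key point survives: the residual is computed against what the base decoder will really produce, so it corrects the base codec’s errors as well as restoring detail.

```python
import numpy as np

def lossy_base_encode_decode(img, step=16):
    return (img // step) * step              # crude quantizer standing in for H.264

def down(img): return img[::2, ::2]
def up(img):   return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

source = np.random.randint(0, 256, (64, 64)).astype(np.int16)

# Encoder: run the base at lower resolution, see what the decoder will see,
# and send the difference from the original as the enhancement.
base_out = lossy_base_encode_decode(down(source))   # what the receiver's base decoder yields
residual = source - up(base_out)                    # the "difference" Jim describes

# Receiver: decode base, upscale, add the transmitted residual back.
rebuilt = up(base_out) + residual
assert np.array_equal(rebuilt, source)   # residual kept lossless here; in practice
                                         # the enhancement stream is compressed too
```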

Jim Jachetta (12:35):

That’s awesome.

Matt Hughes (12:36):

Where can LCEVC… is hard to say [crosstalk 00:12:40]-

Jim Jachetta (12:40):

Yeah. It’s hard to say fast, right?

Matt Hughes (12:42):

Where can LCEVC be used? We’re looking at this for HD telepresence, so video conferencing; this is a prime example. We’re talking to the likes of Zoom and GoToMeeting and Teams about where this can fit in. Live sport: providing UHD live sport at HD bitrates or lower, and providing HD video at even lower bitrates. Immersive VR, cloud gaming, smarter cities, IoT: anywhere there’s a signal, this can be used.

Matt Hughes (13:19):

Hyper-scalers. One of the things that we’ve seen working with Xilinx: we’re now baked into parts of the Xilinx chipset. Xilinx manufactures FPGAs, which are part of encoders anyway. What we’ve done is provide our software library to Xilinx, and Xilinx has implemented this on their FPGAs, which is great news.

Jim Jachetta (13:46):

As a hardware manufacturer, if they design a Xilinx FPGA into their video processing hardware, do they buy a license from Xilinx or from you? How does that mechanism work?

Matt Hughes (13:59):

Typically they’ll buy that from Xilinx, because they’re buying their boards from Xilinx and it’s embedded into that board. It’s normally transparent to anybody buying an encoder at that point. It’s just another coding platform or codec available on a dropdown in the encoder.

Jim Jachetta (14:27):

Got you.

Matt Hughes (14:27):

Video calling and conferencing, which I mentioned before, are very important for us right now, and we want to ensure all of that runs smoothly as well.

Jim Jachetta (14:39):

Well, I think your connection today could use a little LCEVC right now. You know we-

Matt Hughes (14:44):

You’re probably right.

Jim Jachetta (14:46):

Matt and I live in the video encoding space. We have no control over GoToWebinar. Matt’s video’s looking a little soft today. We’re not sure if it’s a bandwidth issue or what, but yeah, we need to get GoToMeeting up to speed on LCEVC.

Matt Hughes (15:05):

Yeah, definitely. VoD streaming. This can work on file-based infrastructures as well, so not just live video; this can be implemented to save file size for VoD content, for the Netflixes of the world, the Amazon Primes of the world. This is important for any VoD content, or catch-up TV as well.

Matt Hughes (15:28):

Live streaming, which we’ve talked about, live sports. That’s probably the best use case for us, because this is live encoding. The benefits: you have more throughput, quick and easy integration, and we have broad device support currently; we’re working on making that broader. This is supported on iOS, Android, HTML5 web browsers, and some set-top boxes as well that we’ve worked with directly. This can be used today; we have those libraries available. We’ll show you how you can actually see this in [crosstalk 00:16:09]-

Jim Jachetta (16:08):

Is it complicated to get the SDK and the library up and running? How quickly can this be implemented?

Matt Hughes (16:20):

We’ve packaged it into an SDK. And because it’s software, it’s written to be used at the app level. Within the app itself, you can use a player that supports this, and we’ve got a list of players that you could make the integration with. It really depends on what device you’re working on and what library you would need. At the moment, we’re building those up. And where we are in the process, and I’ll get to this a little bit further on, this is what we’re talking about deploying: we’ve got libraries for HEVC, AVC, VP9, AV1, and then all of these devices, Android, Chrome, so it works with the current ecosystem. That’s how LCEVC has been developed; we wanted to make sure that it was easy to access and easy to deploy.

Matt Hughes (17:23):

How do you get LCEVC? Which kind of goes back to your question. You can ask your encoder manufacturer. Whoever your encoder vendor is, go back to them and say, “Look, I’m interested in testing this.” We have libraries available. We’ve done some work with AWS Elemental, Haivision, Telestream, and Wowza. We’ve done a lot of work with these companies already. Some of them have earlier versions of our library; some of them will be taking on the new LCEVC as well. There are a few on this list that already have. So we’ve got this SDK available for all vendors, but do go to your encoder vendor and ask them for it, and they can get it for you.

Matt Hughes (18:11):

We have trial deployments as well. If you want to test this, we’ve got some trial deployments. If you want to take a file and see what it looks like in a comparison between LCEVC with 264 and 264 on its own, we can do that comparison. You just upload the file, the VoD content. We can also, if needed, take a live channel and put that into our platform. We’ve got a platform that’s available on iPhones. We can take that, put it on an iOS device or an Android device or whatever you want. Or we can take a live channel and you can say, “Okay, I want to see what this looks like. I want to really bring my bitrate down to 100 kilobits per second and see what that looks like.” Yes, we do do 100 kilobits per second video, and that’s proven within our application. So you can see very low data rates, very low bitrates, using LCEVC.

Jim Jachetta (19:12):

I know you’re working with our other partner, Haivision. They’re one of the global leading bonded cellular providers. Their existing HEVC codec is very good at low bitrates; I think they can probably go the lowest of the competition. Implementing your technology, they could go even lower, [crosstalk 00:19:35] that secondary channel coming through to add back some of that missing detail. I think probably one of the most challenging applications is streaming video through an unmanaged network like cellular. Cellular is far more unreliable. Of course, a managed network is great: fiber; satellite is pretty good; the public internet has its moments. I think you’ve got a little bit of a bottleneck on your side, but taking it onto the public cellular network is particularly challenging. I think Haivision is looking to implement this to take their game to yet another level.

Matt Hughes (20:25):

Yeah. I mean, my video’s a prime example. But part of the issue is I haven’t had a haircut in a few months.

Jim Jachetta (20:33):

Some of the barbershops are open here. I got a haircut on Sunday. I went shorter… I went real short because I don’t know when the next time I can get one.

Matt Hughes (20:43):

True. I’ve got to do that. So, that’s LCEVC. The next one I want to talk about is VC-6, SMPTE ST 2117. This is a different codec; this is a full codec, an intra-frame codec. This is used in the contribution market, in the professional video market. It’s available in an encoder and decoder pair that we manufacture. We have packaged software, called P.Link, that sits on a server; I’ll give more information about that server through the presentation. VC-6 is very similar to what we were talking about with LCEVC. However, the thumbnail, the smaller resolution, is our own codec, VC-6. Instead of using a different base codec, we’re using a full-stack codec, which is ours.

Matt Hughes (21:45):

It’s hierarchical, so we have a multi-scale image representation. We take the output, upscale, add the residuals back for each layer, and then create a perfect reconstruction at the other end. As I show this, there are a few ways that this works: you have inbuilt proxies, so I can get high quality proxies built in with each image; single multi-scale archives; efficiency increases with the resolution; and you have the benefits of AI image analysis and HD cut-outs from UHD. So you have different application efficiencies within this. If I go a bit further into this… how does this compare to [crosstalk 00:22:32]-
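
Here is a rough numpy sketch of that hierarchical, multi-scale structure, assuming simple 2x down/upsampling as stand-ins for VC-6’s real filters and entropy coding. Each level stores only the residuals over an upscale of the level below, which is why lower-resolution proxies come for free and the full image reconstructs exactly.

```python
import numpy as np

def down(img): return img[::2, ::2]
def up(img):   return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def encode_pyramid(image, levels=3):
    base, residuals = image, []
    for _ in range(levels):
        smaller = down(base)
        residuals.append(base - up(smaller))   # detail needed to rebuild this level
        base = smaller
    return base, residuals[::-1]               # root thumbnail + per-level residuals

def decode_pyramid(root, residuals, upto=None):
    img = root
    for r in residuals[:upto]:                 # stop early for an inbuilt proxy
        img = up(img) + r
    return img

src = np.random.randint(0, 256, (64, 64)).astype(np.int16)
root, res = encode_pyramid(src)
assert np.array_equal(decode_pyramid(root, res), src)   # perfect reconstruction
proxy = decode_pyramid(root, res, upto=1)               # low-res proxy, no full decode
```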

Jim Jachetta (22:33):

Let me ask you a question there, Matt. Go forward one more. Because you’re taking the base codec, looking at the original, looking at how well the base codec is doing and continually analyzing, looking at the differences, scaling up, scaling down, your technology actually learns. Take like [crosstalk 00:23:01] for example, panning over blades of grass; there are certain challenges. I know in sports here in the US, it’s always the orange basketball over a sea of faces, or the baseball leaving the bat going over a sea of faces. Your system, through use, through trial and error, through looking at the original, looking at the end result, it’s constantly improving. Is that correct?

Matt Hughes (23:31):

That’s correct up to a point. We’ve made the improvements in the version that’s available today, so it’s not learning as you use it more.

Jim Jachetta (23:38):

It’s already learned what it needs to learn.

Matt Hughes (23:41):

It’s already learned what it needs to learn. We can make improvements on that. But the beauty of it is the way that it’s learned; it’s created efficiencies from what it’s learned. What we’ve done in the past is take AI information from scenes and implement that into the codec. So the codec has that information built in, and then it applies it based on what it sees in the video.

Jim Jachetta (24:16):

Well, I think you’ve told me this before, so like say, the sharp edge or a sharp transition in an image, that can be challenging for a codec. There could be aliasing, overshoot, undershoot, some ringing even, that your AI then knows how to handle that type of edge. It’s learned from… that that edge is in its library now, right?

Matt Hughes (24:41):

Lawrence is pinging me right now. It’s trained in advance more than learned. It’s trained to handle it rather than [crosstalk 00:24:49]-

Jim Jachetta (24:48):

It’s been trained how to handle that edge, so it takes fewer resources every time that challenging element in the picture comes up. The knowledge of how to handle that detail is in its database, and that’s what makes it so effective.

Matt Hughes (25:06):

Exactly, yeah.

Jim Jachetta (25:07):

Got you.

Matt Hughes (25:08):

If we run through these, and I’m conscious of time: VC-6 versus JPEG 2000. How does this differ from JPEG 2000, which is very common in the marketplace? Both are intra. P.Link is an intra-only video compression solution, but the bandwidth cost of JPEG 2000 for UHD is massive. Network manufacturers can supply individual modules for this compression, but migrating workflows from HD to UHD results in a four-times increase in bandwidth. This is really important when we start getting into the compression ratio of VC-6 versus JPEG 2000. So it provides significant CAPEX and OPEX savings. When it’s boxed into our solution, it can operate as an end-to-end solution. I’m going to go through a few of these. It can operate within existing network provisioning systems: if somebody is already using Net Insight, Media Links, or Nevion, this can still be used within that network. We’re not offering the network provisioning system; we’re offering an input and an output, an encoder and a decoder, for this.

Matt Hughes (26:32):

What is P.Link? We can take fully flexible bidirectional links in any direction: 1 to 8 cameras, so 1 to 8 video feeds in, compressed, and then an output. That compression is VC-6. It’s flexible. We have what’s called a dynamic multiplexer, which is quite interesting. The dynamic multiplexer is very interesting for us because it’s similar to a statmux; however, we’re doing this on a frame-by-frame basis in real time.

Matt Hughes (27:07):

Instead of a statmux, which is inherently statistical, based on the past, we’re doing everything in real time as it happens. With a statmux you’re able to mux multiple channels together; in our instance, we’re doing this in real time, not on data that happened a while ago. As each frame is measured by the system, we’re able to adjust the amount of bitrate that needs to be used for each channel.
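
A hedged sketch of that frame-by-frame allocation idea follows. The complexity metric and the floor-plus-proportional policy here are illustrative assumptions, not V-Nova’s actual algorithm; the point is only that the split across channels is recomputed every frame from live measurements.

```python
# Split a fixed pipe across channels each frame, in proportion to a live
# per-frame complexity measure (illustrative policy, re-run every frame).
def allocate_per_frame(complexities, pipe_kbps, floor_kbps=500):
    """Give every channel a floor, then share the remainder by complexity."""
    n = len(complexities)
    spare = pipe_kbps - n * floor_kbps
    total = sum(complexities) or 1
    return [floor_kbps + spare * c / total for c in complexities]

# Example: one busy channel (fast motion) and three quiet ones, 20 Mbps pipe.
print(allocate_per_frame([8.0, 1.0, 1.0, 2.0], pipe_kbps=20_000))
# -> [12500.0, 2000.0, 2000.0, 3500.0]; next frame is re-measured and re-split.
```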

Matt Hughes (27:41):

Also, another way that we can save on bandwidth, which I’ll go a little bit more into, is active bitrate control. Then we also have what’s called a full-frame mode, which is the way we render interlaced content, for further improvement.

Matt Hughes (27:57):

Active bitrate control. Let’s say you’re doing a live production where you have eight camera feeds. Those eight camera feeds are going back to base. You’ve got six in proxy here. Those six are being fed to a multiviewer, and your producer is calling the shots. Those six cameras that are set in proxy are only using 10 megabits per second each. When you take one of those cameras and move it to preview, it will automatically bump up to 60 megabits per second. You’re increasing the bitrate frame-accurately. This is happening in real time, frame-accurately; you’re increasing a 10 megabit feed to 60 megabits dead on the frame. Then if you go to program, you can change 60 to 80, again on the very next frame.
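
The role-driven bitrates Matt quotes (10/60/80 Mbps for proxy/preview/program) can be sketched like this. The class shape and the way roles change are hypothetical; only the figures come from the example above.

```python
# Each feed's encoder rate follows the camera's role in the gallery,
# switched frame-accurately when the director re-assigns roles.
ROLE_BITRATES_MBPS = {"proxy": 10, "preview": 60, "program": 80}

class CameraFeed:
    def __init__(self, name, role="proxy"):
        self.name, self.role = name, role

    def bitrate_for_next_frame(self):
        return ROLE_BITRATES_MBPS[self.role]

cams = [CameraFeed(f"cam{i}") for i in range(1, 9)]
# All eight in proxy is Matt's 8 x 10 = 80 Mbps baseline. Now the director
# punches cam3 up on preview while cam1 is on air:
cams[2].role = "preview"
cams[0].role = "program"
total = sum(c.bitrate_for_next_frame() for c in cams)
print(total)   # 80 + 60 + 6*10 = 200 Mbps across all eight feeds this frame
```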

Matt Hughes (28:55):

We’re able to say: I’ve got an eight-camera feed and I’m using a lot less bandwidth, 80 megabits per second for all eight of those feeds. I can be very creative in how I use those. Granted, you’re not going to be able to take ISO cameras and say, look, I’m going to feed those for recording, a wide shot or whatever; you won’t be able to use those 10 megabits per second proxy feeds for that. But if you’re doing a pure live production where nothing else is being recorded and you’re just taking to preview and on air, then this is one more [crosstalk 00:29:30]-

Jim Jachetta (29:30):

In this scenario, you could record the full resolution at the venue side, if you wanted to-

Matt Hughes (29:37):

Exactly.

Jim Jachetta (29:38):

… be another option.

Matt Hughes (29:40):

Yeah. Exactly.

Jim Jachetta (29:41):

But this is a great technique of prioritizing the feed that… You’re maximizing the bandwidth so the director can get a good view of the preview, what he’s about to take. And then of course, the program feed is the top priority using the majority of the bandwidth.

Matt Hughes (29:59):

Exactly. Very efficient. It’s quite a cool feature we have. As I said before, bidirectional feeds: I can say I want four inputs and four outputs, four decode, four encode; or I can have six and two, or eight and one. Oh sorry, seven and one, doing the maths. Seven and one: I’ve got seven cameras heading back to base, then I take my dirty feed that has all my graphics and everything else from the production control room [crosstalk 00:30:31]-

Jim Jachetta (30:34):

In sports, the onsite commentators need to see the program feed coming from the studio. There might be commentators in the studio saying, “So we throw it back to you, Chuck,” and he can see what’s happening. They can see the graphics. This is great. I see the uses for this.

Matt Hughes (30:52):

Don’t forget this is low latency. We’re not working with the GOP structure that you’re working with in the OTT world. You’re seeing this within hundreds of milliseconds, back and forth.

Jim Jachetta (31:02):

Yeah. All intra-frame. Typically this is done over a managed network, but you can do it over an unmanaged network, like the public internet, as well. Correct?

Matt Hughes (31:11):

You can, but you need to have something else in there. You need to have something like Zixi. If you’re doing this over the public internet, you’ve got to make sure that you’ve got that connection. You don’t have the flexibility you would have with a GOP structure in that instance. You do have to have a fiber link between the two areas. Going over the public internet is dangerous; we know that. The proof of it is today [crosstalk 00:31:36].

Jim Jachetta (31:38):

With unmanaged, you need some buffering, like with bonded cellular. We typically need a second or more, or eight tenths of a second; plus it’s a different workflow, so the commentators and the talent have to adjust when they’re talking so they don’t step over each other. But in those instances where you have fiber or a good telco connection, getting that latency down to a couple hundred milliseconds, this is the way to go. Correct?

Matt Hughes (32:08):

Yeah. And I’ll go through some more examples further along. So, just running through this real quickly: high performance intra contribution, flexible software. We’ve got a web-based GUI and NMS integration with other companies. The appliance is robust. We’ve got hot-swappable fans: a fan dies, you can pull it out while it’s still running, and the other fans kick in and take over. They’ll be spinning at the same time; go and get your replacement fan, put that in. All of these are hot-swappable, with no other moving parts within it, so you minimize any risk you have with the one solution. You can deploy into existing workflows, like we said before: working with Nevion, working with Net Insight, working with Media Links, whatever you’re already working with, and you can support all these different workflows. Is it a workflow where you’re doing remote production, REMIs, one day, and a live event the next day? If you’re doing UHD and HD, you could have a mixture. You can choose one UHD and four HD services through at the same time. It’s very, very flexible.

Matt Hughes (33:24):

The applications. Like I said before, live events; in Europe we call them OBs and event productions. These here are live trucks. It’s on a live truck doing UHD back to the studio, back to your video router. A single unit can transport a mix, as I said, and depending on the workflow, the truck can be doing one UHD feed one day and HD the next day, however you want to manage it.

Matt Hughes (33:54):

Site-to-site links. What we did with Sky Italia is, they had a link between Sky Italia and their telco provider. They wanted to take all their video feeds and feed them back and forth; you had two different studios that were linked. Another site-to-site link that we just completed last week: we’re doing all the video links for the World Health Organization in Geneva. They have a link between the United Nations and the World Health Organization. So if you see Tedros doing his talk in the main theater that they have, that’s running through a P.Link. They’re also setting up links this week with Copenhagen; they’ve got an office in Copenhagen. And they’ve got seven regional offices that they’ll be setting P.Links up for, for site-to-site links.

Matt Hughes (34:52):

One of the problems they had before they went with this was latency. They had problems getting quality video, just being able to see charts and information sent from site to site with a very clear picture. That was one thing this really helped them with. They had these fiber links between all the organizations, so it worked out very well for them.

Jim Jachetta (35:23):

Well, in this example, like Sky TV, those are links that are up 24/7 carrying critical programming, so having an enterprise grade chassis, like you said, with redundant power supplies, hot swappable fans, hot swappable power supplies, that’s critical. They can’t afford any downtime on this.

Matt Hughes (35:47):

Yeah. And you’ve had them up and running in your office. You’ve seen, they’re pretty robust.

Jim Jachetta (35:53):

Yeah. I know. It’s a beautiful piece of hardware.

Matt Hughes (35:56):

Yup. Remote production. This is doing remote events, concerts, live sports, same thing, back to the studio. We can do six HD feeds and two return videos, just as an example. With the next slide, I want to talk about our connection with AWS. We recently did a proof of concept and a test with AWS, because one of our clients asked us to do this before they would implement it. The beauty of AWS: it’s a massive organization, they’ve got a lot of worldwide connections, and those connections can range from basic internet connections to their Direct Connect, which goes directly into their cloud. We said, “We’ll take advantage of this, and what we can do is use P.Link in this scenario.”

Matt Hughes (36:54):

What we did is we took a P.Link with AWS and Zixi, and this goes over your standard ISP. We did a UK-to-Australia and return feed. Along that feed you have your internet; okay, it’s a one-gig internet link, but we also had to use Zixi in that instance to provide packet protection, to recover anything that’s lost in the video signal when it goes from our V-Nova office to their London data center, locally. This is your last mile, two miles, half mile, whatever it is. To test, we also took a customer office. Customer office number one was actually Amazon’s AWS office in London, and they have a Direct Connect which goes directly into their data center, into their cloud.

Matt Hughes (37:48):

We then took the feed to a Sydney data center on the other side of the planet, then back to the London data center, back through Zixi over the local internet, one gig on a BT line, a local British Telecom line, to our office, and also through AWS MediaConnect to compare these on each end. Then we had an ISP fiber to do a test, to see what the latency was and how that worked. We did UHD feeds, and I’ll show you some of the data that we got from this, but it was very low latency. I think we were around the 200-millisecond mark, but it was very low latency to do that complete one-way-and-back round trip.

Matt Hughes (38:41):

The benefits: AWS has connections throughout the world, and it’s leased space they’re already paying for all the time, so media companies can piggyback on that available infrastructure, and AWS guarantees the network. That’s MediaConnect. They have two products: one is MediaConnect, which we also use with Zixi; Zixi is part of that MediaConnect package. Then there’s Direct Connect, which is the direct link; we also connect directly to the AWS network through local data centers. Last-mile dedicated comes in at 1 gig to 10 gig. This is important, because when I start showing numbers about what you can fit on a 1-gig or 10-gig line, it matters a lot. AWS and P.Link together are pretty much the only way you can do this, because we’re down into the lower bitrates, and we can actually move this even lower. We can offer multiple services. When I start showing the bitrates and they [crosstalk 00:39:49]-

Jim Jachetta (39:49):

But while maintaining a visually lossless experience.

Matt Hughes (39:59):

True, the thing is with [crosstalk 00:40:01]-

Jim Jachetta (40:01):

Without sacrificing quality for bitrate.

Matt Hughes (40:03):

Yeah, exactly. But using something like JPEG XS or uncompressed or JPEG 2000, you wouldn’t necessarily be able to use this available infrastructure. Ultra-low latency; hourly costs for ingest, which is very important if you’re doing something event-based; and then low cost per gigabit for the output. Zixi integration over the public internet, which we were talking about before: it’s packaged. They’ve already packaged that for you, so you don’t have a separate bill from Zixi and AWS, at least not today; it’s all packaged.

Jim Jachetta (40:43):

Yeah. Right.

Matt Hughes (40:46):

From what we saw, it’s a less expensive alternative doing it this way than doing Zixi on its own. It’s compliant with 2022-7 Seamless Packet Protection; it covers packet loss over the last mile. This is where it’s really important: you can’t just do this over the public internet. It’s that last mile that matters, on either end. You’ve got to make sure you’ve got that packet loss protection in there. You’re able to measure and monitor this. It goes through the local data center instead of the public internet to the destination, and there’s no additional charge from AWS for ingest, which is really important; that could be one thing that would be very expensive. So how much does it cost? This is completely from their website. This is what we found, we [inaudible 00:41:43]-

Jim Jachetta (41:43):

Prices may change.

Matt Hughes (41:45):

Prices may change; there’s a disclaimer here.

Jim Jachetta (41:47):

Your mileage may differ in the real world.

Matt Hughes (41:50):

Exactly. They’re available on their public website. You can plug in these numbers yourself, have a look at them yourself, try out your numbers. This is what we did. We made a little spreadsheet here. We did a five-hour, 60 megabits per second HD stream, using Zixi as well, to the AWS data center, with Direct Connect on the decoder side. Granted, the Direct Connect connection was already there; we didn’t pay for that connection [crosstalk 00:42:19]-

Jim Jachetta (42:22):

Right. There needs to be a little asterisk there.

Matt Hughes (42:26):

That’s an AWS question. They [inaudible 00:42:30].

Jim Jachetta (42:31):

Right. You probably need to be doing enough volume to warrant a direct connect.

Matt Hughes (42:39):

Yeah. It’s possible. And the price for this: we paid 10 bucks 89, $10.89, for five hours of HD content for live production. That’s nothing.

Jim Jachetta (42:53):

I think I crunched that number too. It came out to less than 20 grand a year if you run it 24/7. But like you said earlier, that assumes you have the Direct Connect connection. If you didn’t have that, the cost would probably be different, or much more.
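
The napkin math behind those two figures, using only the numbers quoted in the conversation; actual AWS pricing changes, so treat this as arithmetic, not a quote, and remember the Direct Connect caveat above.

```python
# $10.89 for five hours of a 60 Mbps HD stream, annualized to check
# Jim's "less than 20 grand" 24/7 figure.
HOURS, MBPS, COST_USD = 5, 60, 10.89

gigabytes = MBPS / 8 * 3600 * HOURS / 1000     # 60 Mbps = 7.5 MB/s -> 135 GB moved
cost_per_hour = COST_USD / HOURS               # $2.178/hour
yearly = cost_per_hour * 24 * 365              # ~$19,079 -> "less than 20 grand"

print(f"{gigabytes:.0f} GB moved, ${cost_per_hour:.3f}/hr, ${yearly:,.0f}/yr")
```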

Matt Hughes (43:13):

It depends. You can still go with the Zixi MediaConnect solution; I don’t know. Those things all need to be tested out. [inaudible 00:43:25] say to anyone, “Oh, just do this.” It needs to be tested, and that’s why we have equipment available. This can be set up; you need to test this and make sure it’s right for your organization as well.

Jim Jachetta (43:38):

I should add, VidOvation is an AWS partner. We use AWS every day, with V-Nova, with Haivision. V-Nova is an AWS partner. We can help you sort out these design and configuration details. If you have a good connection to the public internet at your two facilities, you may not need AWS. AWS is usually good when you want to scale, when you want to go one-to-many. But we don’t necessarily have to go through AWS; we’ll help you make those decisions for your particular use case. Correct?

Matt Hughes (44:27):

Yeah. We’re not limited to AWS either. I mean, we need to go through this and it needs to be planned out, especially if you’ve got important video that you’re feeding, you want to make sure that it’s right and what your connections are. We do go through that very, very much in detail.

Matt Hughes (44:44):

Post production. Another thing this can be used for: color correction. My neighbor next door does VFX; he works here in London. We have Shepperton Studios up the road. We’ve got situations where you may have a director or producer based in Hollywood; they want to make sure the scene is right, and they want to see color correction in real time. All of that can happen in real time, and you can use P.Link for that as well. You can take the UHD feed from one place to another, site-to-site, low latency: “Does this look right? Is the aesthetic correct?” This is another set-up we’ve used these for, in the post production market.

Matt Hughes (45:32):

What are the rates? The compression ratio. We’ve got different compression ratios. HD streams: how much bandwidth do we use? We’re looking at around 75 megabits per second; [inaudible 00:45:46] could be less than that in some instances. These are all approximations. Uncompressed would use 1.5 gigabits, JPEG XS would use 400 megabits, and JPEG 2000 would be somewhere around 150. UHD streams are 130 megabits per second; uncompressed, 12 gig; JPEG XS, 4 gig; JPEG 2000, 1.2 gig. When you start looking at that one-gig line from AWS, the choice is clear what you would want to go with.

Jim Jachetta (46:21):

I see a lot of zeroes in those columns. It doesn’t fit in the pipe.

Matt Hughes (46:26):

It doesn’t fit the pipe. If you go to typical latencies, we’re around 100 milliseconds, give or take, depending on which way the wind’s blowing. We’re not going to claim we’re perfect on typical latency; we’re somewhere around where JPEG 2000 is, granted. Uncompressed and JPEG XS are low, around two milliseconds. [crosstalk 00:46:47]-

Jim Jachetta (46:49):

They’re at so much higher bitrates; that’s part of the trade-off.

Matt Hughes (46:52):

Exactly, it’s a trade off.

Jim Jachetta (46:56):

If you don’t do a lot of compression, if you’re just sending the uncompressed packets through, of course that latency is going to be pretty low.

Matt Hughes (47:05):

Exactly. We know that, and that’s not why you would want compression anyway. You want compression to make savings on your bandwidth while keeping good quality, so that’s where that comes in. There’s got to be some latency in there somewhere, and that’s where it happens. We’re not in GOP-structure territory, where you have many seconds; we’re in the milliseconds, hundreds of milliseconds. Your pipe sizes: what can you do with 2160p60 and a 10-gig pipe? 76 feeds here, zero here, two here, eight with JPEG 2000. When we get into 1080i60 on a 10-gig pipe, you’re up to 166 feeds. There’s a big difference in what that looks like.
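
The arithmetic behind that chart, using the approximate per-stream rates quoted a moment ago. One caveat: the roughly 60 Mbps figure for 1080i60 is inferred from the “166 feeds” number rather than stated directly (it sits a bit below the ~75 Mbps quoted for HD streams generally).

```python
# How many feeds of each codec fit in a 10 Gbps link, per the quoted rates.
UHD_MBPS = {"VC-6": 130, "Uncompressed": 12_000, "JPEG XS": 4_000, "JPEG 2000": 1_200}
HD_1080I_MBPS = 60                                # inferred from the "166 feeds" figure

PIPE_MBPS = 10_000                                # 10-gig pipe

for codec, rate in UHD_MBPS.items():
    print(f"UHD 2160p60 over {codec}: {PIPE_MBPS // rate} feeds")
# -> VC-6: 76, Uncompressed: 0, JPEG XS: 2, JPEG 2000: 8
print(f"1080i60 over VC-6: {PIPE_MBPS // HD_1080I_MBPS} feeds")   # -> 166
```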

Matt Hughes (48:04):

In summary: it’s flexible, high quality, and can reduce your bandwidth costs, especially for UHD, by up to 80%. It’s used by some of the biggest names in the business. We’ve got a large racing circuit in Europe that’s currently using these; they just took delivery. I can’t say their name today, but they’re using these for the new racing season that starts in three or four weeks. It is a new technology, granted. We’ve got units that are available; Jim’s got units that he can get to you if you guys want to test anything. We’ve got units in the UK as well, so we’re covered globally. But if you do want to test these, let us know. P.Link is… you have to test it. There’s no way around it. You have to see it, and use it, and feel it, and with everything going on [crosstalk 00:49:05]-

Jim Jachetta (49:06):

A great test would be to send video from a customer’s master control to our lab here in Southern California and then out to the UK and back; kind of a round trip around the globe. Whatever you want. Or if you have New York and LA facilities you want to connect together, we welcome a demo. I think you guys will be amazed. Go back to that chart where you can’t get the JPEG 2000 to fit through the pipe. Some customers will say, “Oh, well, I’m not ready for 4K.” But as you said, Matt, the numbers look really, really good. The compression is amazing, and the higher the resolution, the more efficient the compression is. The 4K savings is a no-brainer, but even at a 50% or 30% savings for HD, you can fit that many more videos in the pipe, as that chart showed, and bandwidth is not free, right?

Matt Hughes (50:20):

That’s the key.

Jim Jachetta (50:23):

And being able to push more content. Most of the questions I’ve gotten, Matt, are about how to get the PowerPoint deck: “Please send me a copy of the final deck.” What we do is post the recording of this session on our blog; or go under the Resources and then Webinars section of our website, the recording will be there, and you’ll be able to download the slides. We even transcribe our conversation here, if you’d like to read what we talked about today. Does anybody have any questions? Seems like-

Matt Hughes (51:11):

Lawrence must have a question for me. He likes to quiz me on the spot.

Jim Jachetta (51:15):

He does. Let me see. I don’t see anything from Lawrence. There were a few people who want to download the presentation. So Matt, thank you so [crosstalk 00:51:31]-

Matt Hughes (51:33):

Harry, one of my guys on this side, says it’s worth mentioning that LCEVC doesn’t need upgraded chips; it will fit onto any of today’s kit, which we mentioned earlier as well, with LCEVC working on [crosstalk 00:51:47]-

Jim Jachetta (51:46):

Right. You don’t need a new chip because it’s software-based. Right. It works around your existing chipset.

Matt Hughes (51:53):

Yeah. He wasn’t listening.

Jim Jachetta (51:55):

Got you. No, that’s an important point. And we encourage everyone, anyone, to reach out to VidOvation. Do you have a contact slide? I have a slide up.

Matt Hughes (52:09):

Right here. Just put it up.

Jim Jachetta (52:15):

You got that? No?

Matt Hughes (52:18):

You’ve got it. Can you see it?

Jim Jachetta (52:21):

No, it’s still… Oh, there we go. Yeah. Perfect. We encourage you to reach out to VidOvation. Of course, you can also reach out to my colleagues, or to Matt or V-Nova directly. We’re a close partner of V-Nova; we’re distributing and supporting the product here in the US. We’re going to be your first line of support. As Matt mentioned, we have quite a few systems in our inventory ready for demo. We encourage you to take a look at this technology. I know some of our customers will put it through their IP codec analyzing equipment. A common thing I’ve seen with this tech is people think they have the input and the output swapped on their analyzer: the quality can’t be that good, that’s not possible, so it must be some glitch.

Jim Jachetta (53:14):

No, it really is that great of a codec. It is visually lossless. It’s really something you should try. Matt, I thank you for today; thank you for laying some knowledge on us. I hope everyone out there is staying safe and healthy. I was looking forward to seeing some people at IBC this year, spending some time in the V-Nova booth, but of course that’s not happening. NAB New York is not happening. At VidOvation, we use GoToWebinar and GoToMeeting; we can do an engineering discovery call where we get Matt and some of his engineers, and VidOvation and some of our engineers, on the call, and collectively we’ll design a system that will solve any hiccups you have in your workflow, solve any of your challenges. And we look forward to hearing from you folks soon.

Matt Hughes (54:21):

Yup. Same here. And thank you, everyone. And as Jim said, as soon as I can get on a flight back to the US, my home, that’d be great, but that’s not going to happen anytime too soon, I don’t think. We don’t know yet.

Jim Jachetta (54:36):

But we’ll lean on technology in the interim. We have product here. You can get support electronically; we can do GoToMeetings. We’re here to serve you. Thank you so much. Thanks again, Matt; thank your colleagues for putting together this presentation. We try to have the recording, the transcription, and the PowerPoint online within a couple of days; at the latest, you should see it by early next week. So thanks, everyone. Have a good rest of your week. Thanks for tuning in, and we hope to hear from you soon. Thank you, Matt.

Matt Hughes (55:19):

Thanks everyone. Thank you.

Jim Jachetta (55:20):

Take care. Bye-bye.
