HCAM Next generation of flexible, portable, HEVC 4K UHD wireless systems from Vislink
FocalPoint is Vislink’s latest wireless camera control system providing comprehensive control functions for today’s broadcast needs.
VISLINK HCAM HEVC/UHD Wireless Camera System with Camera Control Training Session
Wednesday, June 1st at 9 AM PT / 12 Noon ET
HCAM Next generation of flexible, portable, HEVC 4K UHD wireless systems from Vislink.
- HEVC/UHD Very Low Latency Encoder
- Interchangeable, future-proof dual SFP modules supporting Quad 3/6/12G SDI/HDMI/Fiber Optic/SMPTE 2022-6 HD-SDI over IP interfaces
- Integrated Camera Control with FocalPoint Compatibility
- Direct-docking V-Lock & Anton Bauer battery plates with integral power feed through
- Wi-Fi & Bluetooth control via dedicated Android & iOS Application
- ETSI compliant with proposed EN302-064 V2.1.1 regulations
FocalPoint Camera Control System
- Allows control of up to six cameras by multiplexing six control panels over a single RF link
- Supports one-way or return data communication to the camera
- Three CRIU units can be linked together to allow control of 18 cameras using one RF head
- 2.2” Colour LCD with an intuitive menu for easy configuration
- Front panel LED shows data activity/system health diagnostic information
- Compact UHF telemetry transmitter unit with copper and fiber connectivity
- Unit configurable via the front panel or by a web browser
Jim Jachetta (00:01):
Good morning, everyone. I’m here again with Dave Edwards, our VIP guest. This is his third appearance. I’m Jim Jachetta, CTO and co-founder of VidOvation. And David’s here for the third time and hopefully many more. David, you’re a product manager, right? At Vislink?
David Edwards (00:18):
Product manager at Vislink, that’s correct. And so I’m here to talk about the Vislink HCAM product and the associated ecosystem around it today. So hopefully this will be useful to everyone on the call and everyone who watches the recording after the event. What I really want to start off with is the HCAM product; it’s a transmitter product for cameras.
David Edwards (00:48):
It enables you to get up close and personal and gives you those immersive images that you get from a wireless camera transmitter. It gives you that freedom of movement to get you right at the heart of the action. And I wanted to start today with how using wireless cameras, and TV generally, can really capture people’s imaginations and change how people perceive video viewing.
David Edwards (01:17):
As an example, we’re almost 70 years to the day from the first major international TV broadcast. This was Queen Elizabeth II’s coronation in London, 70 years ago. And as a major broadcast, it was so important to people, it captivated people so much, that people dressed up as TV screens. It became an event.
David Edwards (01:40):
It became a TV event and it became something that changed what people do and TV really can do that. So much so that when TV changed from black and white to color, the number of people dreaming in black and white dropped from 25% to 7%. It affected how we dream. Similarly, perhaps you didn’t know, it’s been scientifically proven that octopi prefer HDTV to SDTV. It’s not just humans.
Jim Jachetta (02:12):
How do you test that, David? You put two screens side by side and they’re drawn-
David Edwards (02:16):
Yeah. So this was two screens at either end of a big tank. And on the TV screen, the researchers presented a picture of a female octopus to the male octopus in the tank and the octopi, the octopus, it swam towards the picture of the octopus on the HD screen. Preferred the HD screen. It was more realistic to the male octopus. So image quality can make a difference.
David Edwards (02:55):
And so what we’re saying about wireless cameras is that they do get you to the heart of the action. They give you that immersive point of view. And that’s why we’re talking about Jubilees. We’re talking about octopi. We’re talking about how we dream. But video is also important to how people monetize content and events after the event.
David Edwards (03:21):
And some recent research shows that, in terms of social media, a 30% increase in video content on a social media post increases the engagement on that post by over a hundred percent. So it’s not just during a live event where you may choose to use video captured by a wireless camera.
David Edwards (03:45):
Those sorts of immersive views, those point-of-view shots that you get from wireless cameras, are the sort of pieces of content that play out really, really well on social media. So if you are trying to engage your audience beyond that live event, a major sporting broadcast or something like that, then it may well be those wireless camera views that you turn to, to captivate your viewers and to keep the engagement with your sports fans going for longer.
David Edwards (04:16):
So HCAM, what is it? Well, it’s often billed as a 4K device, as the device to give you 4K imaging off cameras. And of course, we know that 4K delivers that premium quality picture and that’s important. But it’s not the only thing that’s important. Screen size is important to 4K TV, because the visual impact that you get from 4K rolls off with the distance that you are from the TV screen.
David Edwards (04:51):
So if you are going to enjoy the visual impact of 4K, you either need a large screen, and many 4K TV screens are large, or you need to be fairly close to it. You need to be within three times the screen height in order to perceive the resolution improvements that you get from 4K screens. And so people are considering that. People are considering the costs of implementing 4K, maybe 4K to the home.
David Edwards (05:22):
They may still choose to capture in 4K, but there are some services that are available in 4K to the home and some that are not. But one of the things people are looking at is high dynamic range. So here’s a little quiz for you. Maybe you can use the text chat that you’ve got. You can launch that window and I’ll ask you: out of these two images, which is sharper?
David Edwards (05:46):
The image on the left or the image on the right? Make your choice. So have a look at that and see what you think. I’ll give you another image. So this image, picture of a penguin. Which image is sharper? The image on the left, or the image on the right? Look at that, make your choice and decide. There is a difference.
Jim Jachetta (06:13):
Like left, left, right. And let’s see a right.
David Edwards (06:23):
So I’ll give you those two ones. So I’ve reordered things a bit here, but what you’re seeing there, I think most people have chosen the image on the left for the landscape shot and for the penguin, most people seem to have chosen the image on the right. And the difference there is purely contrast. The only difference there.
David Edwards (06:48):
So you are perceiving those images as sharper because there is higher contrast to those images. And that’s one of the effects that is noticeably different about high dynamic range. Because what high dynamic range does is it gives you more levels of luminance and by having more levels of luminance, you can have greater luminance separation between each of those levels of gray.
David Edwards (07:16):
And by having greater separation between the levels of gray that can be presented to you on an HDR screen, what you get is perceived as a sharper image.
Jim Jachetta (07:29):
Isn’t there research David too, that the human eye has more luminance, dynamic range than… More luminance resolution or sensitivity than color, right?
David Edwards (07:43):
That’s right. And so what HDR attempts to do is give a luminance range that is much closer to simultaneous human vision than traditional SDR video. It’s not the same range as your long-term vision, which can go, we know, from very dark up to near blindness. But in terms of what your eye can see at one time, HDR video is much closer to what the human eye can perceive.
David Edwards (08:17):
So it’s giving you a more realistic view for the human visual system. And so that ability to give more levels of gray, to give what appears to be more contrast in the image, gives you what appears to be a sharper image. And so, although people may choose to capture images in 4K HDR, they may choose to output that content onto consumer channels in other formats, such as HD HDR.
David Edwards (08:48):
So in terms of HDR production flows, you may capture and produce in certain formats. Sony S-Log is one of those formats. There are various HDR formats for delivery to the home. Hybrid Log-Gamma, which we have here, is one of them. Other systems, such as Dolby’s system, are much better for produced content.
David Edwards (09:16):
We were talking about The Crown, the Netflix series, a few moments before we started. For that produced content, you may choose to use the Dolby format because it gives you much greater flexibility. The Dolby format enables you to carry that HDR data as metadata, and managing metadata workflows in a live environment is challenging. And so a simpler mechanism has been chosen for live workflows. So that’s HLG as one of the formats for delivery of live content.
David Edwards (09:55):
So in terms of managing those HDR workflows within the Vislink HCAM environment, within the production chain, you can signal your preferred HDR format, whether it’s S-Log, whether it’s HLG, whether it’s HDR10, through the production flow. You don’t want to have your camera mistakenly set up, maybe in HDR10, with the whole chain thinking that it’s HLG. That won’t end well.
David Edwards (10:22):
And so the HCAM can read the HDR format from the camera and signal that through the production chain, so it minimizes the chances of a mistake in terms of how you configure the system. So, as we said, HDR delivers an impact at all viewing distances and that’s really great. 4K, that’s an extra level again. But obviously 4K requires more data, and some production workflows may be able to justify that 4K workflow, those 4K high data rates.
David Edwards (11:01):
Maybe if the content is a major event, maybe like the Super Bowl or an Olympics, you can justify that if you are outputting onto a pay TV channel in whatever flavor things come in these days, you may be able to justify and monetize a 4k delivery chain.
David Edwards (11:20):
Other channels may be… A national broadcaster may not be able to afford a 4k production flow, may not have the bitrate to support a 4k production flow to the end user. But HDR, whether it’s applied to HD or whether it’s applied to 4k, is a really good choice. However you are doing your resolution.
David Edwards (11:45):
So Jim, you talked a moment ago about what the eye can see in terms of color, and I want to talk about that for a moment. So we know that good image quality at source can give you good image quality at home. And we’ve certainly seen, if you go back and watch old content from 10 years ago, maybe 20 years ago, you can clearly see how production image quality has changed over time and how that relates to what you perceive on TV.
David Edwards (12:17):
Now our TV chains, our TV screens, are much better. And in terms of the live production workflow and distribution, that may go through a number of stages: the OB company, the content distributor, maybe a regional broadcaster may be involved, the consumer platform that you are watching on. All of those stages go through many encode and decode processes, and we have to consider how the human visual system relates to that.
David Edwards (12:49):
So I don’t know whether people want to have a little look at this image. One of the things that may surprise you about this image is that it’s a black and white image. So here we have an optical illusion, the color assimilation grid optical illusion. What you are seeing there is a highish resolution black and white image with a very low resolution color grid applied to it. And to the human eye, if I go back a moment, that gives the impression that what you’re seeing is a full color image, and it is not.
David Edwards (13:30):
And so within broadcast, we use the property that the human eye is not so sensitive to resolution in color to save some of the data rate. So some systems, certainly the standard direct-to-home broadcasts, are done in the 4:2:0 video format, where there is much less color information sent compared to the luminance.
David Edwards (13:56):
But you may not want to do that throughout the whole broadcast chain, because you can apply more color resolution to your image within various broadcasting profiles; 4:2:2 is the most commonly used one. And one of the properties of that is, as you go through multiple encode-decode stages through the broadcast chain, 4:2:2 video suffers much less in terms of color bleeding across color edges.
David Edwards (14:23):
So I’ve simulated that with these two sets of images there. Color bleeding can reduce the perceived resolution of the image. It makes a big difference if you’re talking about superimposing on-screen graphics as well, those sorts of hard, high-color images that you may put onto your broadcasts.
David Edwards (14:45):
They look much better if you’ve taken your source content and processed it as 4:2:2 video. And so the Vislink HCAM supports 4:2:2 video, because it is involved in transmitting that raw camera content, so it’s right at the front of the broadcast chain. The highest, premium quality video really matters in Vislink HCAM applications, so we choose 4:2:2 as the preferred format for the Vislink HCAM.
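As a rough, illustrative sketch of what those subsampling choices mean in data terms (the 1080p50 10-bit figures here are just example numbers, not Vislink specifications):

```python
# Samples per pixel under common chroma subsampling schemes:
# one luma sample per pixel, plus two chroma samples shared over
# 1 (4:4:4), 2 (4:2:2) or 4 (4:2:0) pixels.
def samples_per_pixel(scheme: str) -> float:
    chroma = {"4:4:4": 2.0, "4:2:2": 1.0, "4:2:0": 0.5}
    return 1.0 + chroma[scheme]

def raw_rate_mbps(width: int, height: int, fps: float,
                  bit_depth: int, scheme: str) -> float:
    """Uncompressed video data rate in Mbit/s."""
    return width * height * fps * bit_depth * samples_per_pixel(scheme) / 1e6

# 1080p50, 10-bit: 4:2:2 carries a third more data than 4:2:0 --
# that is the extra color resolution which survives repeated
# encode/decode stages with less color bleeding.
r422 = raw_rate_mbps(1920, 1080, 50, 10, "4:2:2")  # ~2074 Mbit/s raw
r420 = raw_rate_mbps(1920, 1080, 50, 10, "4:2:0")  # ~1555 Mbit/s raw
```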
David Edwards (15:19):
And just flipping back to luminance for a moment, in terms of quantization, whether you’re talking about 8-bit quantized luma or 10-bit quantized luma. Here on the left, what you’re seeing is an 8-bit version of the image, and you can hopefully see the banding, the luminance banding, there: the posterization effect that you get in those gradual changes in luminance on that sunset on the left-hand side. On the right-hand side, with 10-bit luma, 10-bit quantization of the brightness, you are not seeing that banding.
David Edwards (16:00):
And that’s particularly important if you are doing high dynamic range, because if you have a non-linear relationship of your Luma input brightness to output brightness through the production chain, then it’s important that you have enough quantization bits. Enough resolution there to support the high dynamic range, non-linear curve.
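The arithmetic behind those bit depths is simple enough to sketch; this is purely illustrative:

```python
# Number of representable grey levels for a given luma bit depth,
# and the relative quantization step size (full range / levels).
def luma_levels(bits: int) -> int:
    return 2 ** bits

def relative_step(bits: int) -> float:
    return 1.0 / luma_levels(bits)

# 8-bit gives 256 levels; 10-bit gives 1024, so each step is 4x
# finer -- fine enough to hide banding on smooth gradients and to
# carry a non-linear HDR transfer curve without visible steps.
steps_8 = luma_levels(8)    # 256
steps_10 = luma_levels(10)  # 1024
```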
David Edwards (16:26):
So for that reason, and also because the Vislink HCAM may be used in high dynamic range broadcasts, the product supports 10-bit luma quantization. So I’m going to change tack entirely now and talk about the transmission technology. We talked about the video compression; that is half of a wireless camera transmitter.
David Edwards (16:52):
And I’m now going to talk about the transmission technology. So Vislink HCAM, it uses OFDM based transmission technology. Most of our customers choose to use it in LMS-T mode. This is the Vislink system that we have and it is designed to work within a reflective environment.
David Edwards (17:18):
And you can imagine, where you are using this maybe for broadcast within a sports stadium, or maybe within other built-up environments, then you are going to get reflections off hard objects. And you’re going to create places where there is fading. You’ll have places where reflections combine to reinforce the signal and give you high signal strength, and you’ll have other positions, maybe just a foot away, where you have fading effects and where you get cancellation of those signals.
David Edwards (17:53):
And so LMS-T is designed to be robust to that fading characteristic of transmissions within an urban environment. So when we are transmitting, we want to make the best use, optimum efficiency, of the bandwidth allocation that is given to the transmission. We all know that bandwidth is a scarce commodity.
David Edwards (18:19):
And so the LMS-T transmission mode aims to fully occupy the allocated bandwidth. Typically, bandwidths for wireless cameras are allocated in 10 megahertz channels. Some people consider this to be eight megahertz with a guard band, but within LMS-T we can fully occupy that 10 megahertz channel.
David Edwards (18:50):
By fully occupying it, we get the capability to maximize the bitrate that is transmitted over that spectrum. And it’s not just making full use of the spectrum; it’s making the best use of the bits available. So LMS-T uses low-density parity-check codes as an FEC. That’s a more efficient FEC compared to some of the older transmission schemes, such as DVB-T. So we get good error protection for a smaller number of bits dedicated to the error correction. That means that we maximize the data rate for the real video payload.
David Edwards (19:32):
So here’s some examples of how that all combines together. So LMS-T is about 20% more efficient in terms of how it uses bandwidth, the allocated bandwidth and it’s about 30% more efficient in terms of how it uses those data bits towards the data payload. So in total, that gives you about a 50% gain compared to DVB-T for the same bandwidth allocation.
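Those two quoted efficiency figures combine multiplicatively, which is roughly where the 50% comes from; a quick illustrative check using only the numbers above:

```python
# Combining the two quoted efficiency gains of LMS-T over DVB-T:
bandwidth_gain = 1.20   # ~20% better occupancy of the allocated channel
payload_gain = 1.30     # ~30% more bits left for payload (LDPC vs older FEC)

# The gains multiply rather than add:
total_gain = bandwidth_gain * payload_gain
# -> ~1.56, i.e. roughly the "about 50%" throughput gain quoted
```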
David Edwards (20:03):
So more megabits for your video gives you better picture quality, and that’s what the game is all about. There are ways that you can get more megabits within an allocation again, and that’s with dual-pedestal transmission. In this case, you might combine two 10 megahertz channels, and you combine them very, very closely together, which enables you to effectively fully occupy 20 megahertz of space. And that gives you twice the data rate. And that’s what some of our customers use for top quality 4K transmissions of the wireless camera.
Jim Jachetta (20:48):
So you feed two carriers in a given band where other systems only use one carrier. So you double the throughput?
David Edwards (20:58):
So we actually use two adjacent bandwidth allocations and that enables us to double the bitrate through-
Jim Jachetta (21:06):
Oh, okay. So you use two adjacent bands. Okay. Got it. Got it.
David Edwards (21:10):
Because these bands are often allocated to you in 10 megahertz channels. So you don’t normally have the luxury of, say, occupying a 12 megahertz channel or something like that. You can have 10 megahertz or you can have one 10 megahertz plus another 10 megahertz next door.
David Edwards (21:31):
And so it gives you the flexibility to occupy two bandwidth allocations, enabling you to get double the bitrate. And so if you’re dealing with the highest quality 4K images, which you may want to do for a major event such as the Super Bowl or an Olympics, you may choose to do that. But there’s a relationship between picture quality, bitrate and latency.
David Edwards (21:56):
There’s a nice little triangular relationship, and what you have to consider when you choose your transmission parameters is that relationship. If you choose, for example, to reduce the latency, the encoding latency, through an MPEG compression engine, and you want to maintain the picture quality, then really your only choice to enable you to do those things,
David Edwards (22:26):
lower latency while maintaining picture quality, is to increase your bitrate. Similarly, if you want to increase your picture quality and maintain the latency, again, the only real choice available to you is to increase your bitrate. So there is a relationship between all of these different parameters which you may choose to juggle. That’s a choice that you have to make when you are configuring your system.
David Edwards (22:59):
When you are designing how you’re going to use the equipment, how to balance bitrate, how to balance picture quality, how to balance encode-decode latency. But there is a tool that changes that, and that’s your encoding standard. So we know that HEVC, or H.265, compression is more efficient in terms of bitrate than MPEG-4, sometimes called H.264.
David Edwards (23:26):
And so if you want to reduce your bitrate without affecting picture quality, without potentially affecting latency, then you can choose HEVC video compression. And that’s exactly what we do for 4K transmission. 4K transmission is four times the raw data rate, and you need to transmit that.
David Edwards (23:49):
How do you do that efficiently within the bitrate that’s available to you within the RF bandwidth that’s available to you? HEVC is the obvious choice. It enables you to give you good quality pictures within a reasonable data rate, within a bandwidth that you have available to you. And so that’s the choice of HEVC.
David Edwards (24:10):
Also really useful for HD as well. So here’s an example comparing Vislink’s previous best MPEG-4 encoder and Vislink’s HCAM HEVC encoder. This is a picture quality graph: on the X axis, you are seeing the video bitrate, the encoding bitrate, against a calculated picture quality measurement on the Y axis. And what we’re seeing is that with a change from MPEG-4 compression to HEVC compression, you can maintain the picture quality, hold the Y-axis value at about 45 dB.
David Edwards (24:58):
You can maintain that picture quality level, but you can effectively halve the bitrate that you need to give you that same picture quality. And so HEVC video compression is a really good tool for enabling you to be more efficient in your bitrate, to be more efficient in how you use the bandwidth allocations that are available to you.
David Edwards (25:22):
So that’s a good example of why HEVC is a good choice to use. It might, for example, enable you to transmit twice as many wireless camera feeds within the bandwidth that you have available as with an MPEG-4 system, and that might give you more creative tools. The other way you can use this: you can change from MPEG-4 compression, an MPEG-4 wireless camera transmitter, to an HEVC wireless camera transmitter, such as the Vislink HCAM, and that enables you to reduce your bitrate.
David Edwards (26:04):
But you can do that in such a way that you maintain your bandwidth. And if you maintain your bandwidth, you can then rejuggle your modulation and FEC choices to give you a more robust transmission, and that can enable you to double the transmission range of your wireless camera transmitter. So here’s an example of that. If I was previously using an MPEG-4 H.264 encoder in LMS-T 16QAM, 10 megahertz, at a bitrate of 19 megabits, and I want to maintain my picture quality, I can now maintain that picture quality and do it with HEVC compression.
David Edwards (26:45):
And I can do that within nine megabits, and I can dedicate all of the extra available bitrate to FEC. And that enables me to give you equivalent picture quality at twice the transmission range, which is equivalent to a 6 dB transmission gain. And so some of our customers are migrating from MPEG-4 technologies to HEVC technologies on the Vislink HCAM to give them greater range. And again, that gives them greater flexibility in what they can capture and produce at an event.
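As a hedged sketch of why 6 dB corresponds to roughly double the range, assuming simple free-space path loss (which real stadium environments only approximate):

```python
import math

def range_factor(link_margin_db: float) -> float:
    """Extra distance you can cover for a given link margin in dB,
    assuming free-space path loss, which grows as 20*log10(d)."""
    return 10 ** (link_margin_db / 20.0)

# Halving the video bitrate (19 -> 9 Mbit/s) and spending the freed
# capacity on FEC buys roughly 6 dB of margin, which under this
# model is roughly a 2x transmission range.
factor = range_factor(6.0)  # ~1.995
```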
David Edwards (27:17):
They can wander further around their stadium, around their outside event, around their golf course. All of those things become possible. So I’m going to switch subject again and talk to you about why latency is important. When we produce programs, it’s quite likely that an event will be produced with a mixture of wireless cameras and wired cameras, or line cameras.
David Edwards (27:53):
And obviously those camera feeds are likely to be frame-synced together so you can switch between them, do your vision mixing, and choose the most appropriate camera feed that tells the story. And the director will be busy choosing the feeds that tell the story of the event that’s being produced. And that’s great, but what if those camera feeds from the wireless camera are not co-timed with the line cameras?
David Edwards (28:20):
With the ones that are not compressed and just connected via fiber or cable of some sort? Because wireless cameras, in order to deliver high quality video feeds over the available bandwidth, have to do video encoding, RF transmission and video decoding, and all of those processes take time. The video encode and the video decode are the dominant factors there; the transmission time is fairly small.
David Edwards (28:56):
And when you are in the brave new world of 4K and HEVC, the video encode and video decode processes are much more complex. There are many more choices of block size, many more intra modes to choose from, many more transform sizes, and all of that processing. Plus the fact that you’re dealing with many more pixels in the 4K world.
David Edwards (29:21):
All of that complexity has the potential to take more time. And so that has the potential to increase your through latency of a wireless camera versus the almost zero latency of a wired line camera. And that can cause problems for the event director. If you are switching between a wireless camera and a wired line camera and you make a switch at a critical moment… in this example, I’m showing a goal being scored in a game of soccer.
David Edwards (29:56):
It may result in you cutting at a certain point and maybe missing that critical action of the goal being scored. Or if you switch the other way, between the wired camera and the wireless camera, you may find that you see an event twice; you may see the ball being kicked twice if there is a significant delay between those wired and wireless feeds.
David Edwards (30:22):
And so that’s not good. That’s confusing for the viewer. That’s confusing for the director. It limits the creative freedom that the event director has in using those wireless cameras if there’s a significant time delay. So some of our customers ask us for zero latency. We can never, of course, deliver quite zero latency, but we can shoot for it. We can go for some very low latency modes.
David Edwards (30:49):
And within the Vislink decoder, the Vislink Quantum decoder, we have optimized and pipelined the video processing flow to have much better MPEG buffer management, and we’ve pipelined the data processing there so that we can now achieve just a single frame of latency between the capture from the camera and the play-out of the receiver.
David Edwards (31:18):
And so with a single frame latency in the region of 17 to 20 milliseconds, depending on which frame rate you are at, then you have that creative freedom to cut at will between your wired and wireless camera. And that opens up much more creative opportunities for the event director. And so that’s an important part of a wireless camera as well.
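The single-frame figures quoted follow directly from the frame period; a trivial illustrative check:

```python
def frame_latency_ms(fps: float) -> float:
    """Duration of one frame period in milliseconds."""
    return 1000.0 / fps

# A one-frame delay is 20 ms at 50p and ~16.7 ms at 59.94p --
# matching the "17 to 20 milliseconds" range quoted above.
latency_50p = frame_latency_ms(50)      # 20.0 ms
latency_5994p = frame_latency_ms(59.94)  # ~16.7 ms
```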
David Edwards (31:44):
In lowest latency mode, as you saw when I was talking about those triangles of bitrate, video quality and latency, there is a hit to the picture quality. That’s a choice that you, the operator, or the production team will make when using the wireless cameras. How do they want to balance latency with bitrate, with video quality? That’s a choice, but we provide those tools for people to make those choices in the way that’s most applicable to how they want to present their content.
David Edwards (32:18):
I’m going to switch again to talking about resolution. Not spatial resolution this time, not 4K resolution versus HD resolution, but temporal resolution, in terms of time. When you are dealing with fast moving subjects, how do you deliver that time resolution? And really what we’re talking about here is action replays.
David Edwards (32:44):
So if you want to see what unfurled on screen... was that a foul? How did a particular golf swing go? I showed a picture of the long jump previously; how did that play out? You may choose to play that back in slow motion, and in order to do a slow motion replay, you need a high frame rate. So the normal frame rates of 50 frames per second, or 59 or 60 frames per second, may not be enough to deliver that slow motion action replay.
David Edwards (33:21):
You may choose to capture at a much higher frame rate, so three times or four times normal frame rate. And so the Vislink HCAM provides the ability to do that. It will enable you to transmit at three or four times the normal frame rate, which you can then take into your replay server, into your EVS replay system, and then play back smoothly at a much slower rate, and you can see those action replays being played out for you.
David Edwards (33:49):
So not only that, HCAM and the Quantum receiver give you a dual-path workflow. You can capture at high speed off the camera. You can set your camera to three times or four times normal frame rate, and you can pass that high frame rate through the wireless camera system and store all of your frames into your slow motion workflow.
David Edwards (34:12):
But you can also pick out every third or fourth frame and put that through your real time workflow. And so that one wireless camera enables you to fulfill simultaneously both the slow motion workflow and the real time workflow for your director. Again, a creative tool to choose when to do a slow motion replay. All of the frames are there to play that back nice and smoothly.
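A minimal sketch of that dual-path idea, with hypothetical frame lists standing in for the real video feeds:

```python
def realtime_pick(high_speed_frames: list, speedup: int) -> list:
    """Take every Nth frame of an Nx high-speed capture to feed the
    normal-rate live path, while the full sequence goes to replay."""
    return high_speed_frames[::speedup]

# 12 frames captured at 3x normal rate: the replay workflow keeps
# all 12 frames, while the live cut sees frames 0, 3, 6 and 9 at
# normal speed -- one camera serving both workflows simultaneously.
live = realtime_pick(list(range(12)), 3)
```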
David Edwards (34:40):
I’ll switch again to camera control. It’s one of the things, Jim, you wanted us to talk about. Camera control is really important if you want to maintain the full capability of the system. When you have a camera operator out in the field, running up and down the sideline at events, trying to capture those shots up close and personal, you don’t want to load the camera operator with having to manage the iris.
David Edwards (35:09):
You want, as with all of the cameras that may be deployed at an event, to ensure that you’ve got the right color balance across all of the cameras being used, and you want to do that wirelessly. So within the HCAM system, we have the ability to rack those cameras and to control them remotely, away from the camera operator.
David Edwards (35:32):
And so we have an option which we can deploy called FocalPoint, and FocalPoint is a system which we can fit in the OB truck that gives the racking crew the ability to control the iris and optimize the color balance across all of the cameras, and do that remotely over a separate UHF telemetry channel. Within that system, we support all of the main cameras, whether it’s a Sony camera, a Panasonic camera, or a Grass Valley camera; we have that capability within the system.
David Edwards (36:09):
And so that enables you to build up a high quality production. So those are some of the HCAM features and why they’re important. I’ll just talk you through the interfaces that you get with the Vislink HCAM. It has four SDI inputs that can enable you to do four separate HD encodes, if you wish. But most of our customers, when they’re using all four of those inputs,
David Edwards (36:41):
choose to use it for 4K. So that’s four 3G SDI feeds. For 4K, we also have the ability to take 6G or 12G as a single SDI input, so that capability is there. In terms of scrambling, you may want to make that RF transmission secure, so we offer BISS and AES scrambling modes over the RF link. In terms of the RF modulation, I talked a bit about LMS-T modulation and how it is more bitrate efficient within the allocated bandwidth.
David Edwards (37:20):
We also support DVB-T transmission modes and ISDB-T transmission modes as well. And the real answer is there’s no one transmission mode that is good for all applications. We have customers who choose to use ISDB-T for motoring events; ISDB-T has the property that it is better for high-speed objects. For putting cameras on motor racing cars for onboard shots, ISDB-T copes better with Doppler effects.
David Edwards (37:55):
DVB-T is sometimes better for open-water applications, and LMS-T within urban environments. It’s a matter of choosing the right one for the application where you are. In terms of frequencies, frequency allocations are different across the globe and in different regions. Typically, our customers choose to use the two gigahertz band or potentially the seven gigahertz band.
David Edwards (38:25):
They’re the most common ones. And so we can offer interchangeable RF modules if you find that you are covering different events that may need different frequency bands. So we have that frequency interchangeability, if that’s useful to you. And in terms of the other connectivity, we have connections for camera control, for tally, and for external audio.
David Edwards (38:51):
Although most of our customers and the camera systems typically use embedded audio, so that’s all available as well. Now I just wanted to talk for a few more minutes about the receiver, the RF receiver and decoder, because this is an end-to-end system in most cases. And I wanted to introduce you to the Vislink Quantum receiver. Quantum is a full-width 1RU receiver/decoder product. It offers up to 16 RF inputs, and it implements diversity switching in order to give good, robust reception.
David Edwards (39:30):
So it’s particularly good, with its 16 RF inputs, for wide-area events. Golf is a good example application of that, where you may choose to deploy receive antennas over a wide geographical area. It has also been designed with IP infrastructures in mind, particularly to aid remote production. So in this case, you may choose to have a Vislink HCAM roaming a stadium and a Vislink Quantum receiver doing RF demodulation at that stadium.
David Edwards (40:03):
But you may choose to implement remote production techniques and produce your programming back at your broadcast center. And so many of our customers are looking at installing Quantum receivers with demodulators within the stadium and then having an IP path back to the production center, where you have a Quantum receiver that is dedicated to video decode.
David Edwards (40:26):
So that’s IP input to the Quantum receiver at your production center, and that enables all of the production teams to get the benefits of remote production: fewer people on site and more efficient working practices, so production teams can work on multiple events across the day, with less travel, lower travel costs, and a lower carbon impact because of that reduced amount of travel.
David Edwards (40:52):
So those are all good reasons for remote production. You can also do multi-service decode within a Quantum receiver. So if you have two wireless cameras on site, you can demodulate two wireless camera feeds within the Quantum, and you can decode two services within a single Quantum receiver. It’s a channel-dense demodulation and channel-dense decode device, and obviously that reduces your cost per camera feed.
David Edwards (41:26):
In terms of network connectivity costs, when you do your remote production, you need IP connectivity between stadium and production center. You may choose to make that connection with guaranteed-quality leased fiber connectivity, or you may choose to use the unmanaged internet, which obviously has a much lower cost. And in order to make the unmanaged internet reliable and robust, the Quantum receiver implements the SRT protocol.
David Edwards (41:58):
And so that enables you to potentially cover events at a much lower cost, and maybe now you can cover new events that you couldn’t afford to cover before with wireless cameras, because the connectivity cost between event and production center is so much lower. And so by using SRT, you have more choices.
David Edwards (42:18):
You have a lower-cost production budget. In terms of how you use the device, you have a touchscreen interface, and you also have a jockey wheel, which you can rotate and push to configure the unit, and obviously web browser control as well.
David Edwards (42:35):
And in terms of connectivity, hopefully you can see there that the rear panel of the device is full of connectors, whether they’re the RF inputs on the right (we’ve cut off half of that picture, but you get the impression), ASI transport stream in or out, or IP transport stream over the LAN and WAN connections. And decoded video comes out on those SFP ports that are in the middle of the image that I’ve got there for you.
David Edwards (43:08):
And the device is also ready for IP connectivity within, let’s say, the baseband decoded video world. So once you’ve decoded your video, you can route that video around your broadcast center over IP connectivity through SMPTE 2110 architectures. And that means it’s much easier and lower cost to route video around your plant. And so SMPTE 2110, whether it’s for HD interfaces or for the full 4K interfaces, all of that capability is there for you within Quantum.
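To put rough numbers on the baseband flows David describes, here is a minimal sketch (not from Vislink, and with an illustrative function name) of the uncompressed active-picture bit rates that SMPTE ST 2110-20 essence carries around a plant. It ignores RTP/UDP/IP packet overhead and ancillary data:

```python
def uncompressed_video_bps(width, height, fps, bits_per_pixel):
    """Approximate active-picture bit rate for an uncompressed video flow
    (e.g. SMPTE ST 2110-20 essence), ignoring packet overhead."""
    return width * height * fps * bits_per_pixel

# 10-bit 4:2:2 sampling averages 20 bits per pixel
# (10 bits luma per pixel, plus 20 bits of chroma shared across 2 pixels).
hd = uncompressed_video_bps(1920, 1080, 60, 20)
uhd = uncompressed_video_bps(3840, 2160, 60, 20)
print(f"1080p60: {hd / 1e9:.2f} Gb/s")   # ~2.49 Gb/s
print(f"2160p60: {uhd / 1e9:.2f} Gb/s")  # ~9.95 Gb/s
```

Rates like these are why HD flows fit a 3G SDI SFP while full 4K needs 12G interfaces or multiple links.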
David Edwards (43:45):
So that’s pretty much it as a quick summary of what HCAM allows you to do as a combined pair between HCAM and Quantum. You can, of course, do your 1080i and 1080p productions in MPEG-4 if you need some backwards compatibility. Whether in 1080p or 4K, you have high dynamic range capability, high frame rate capability, low latency, and a Quantum receiver that gives you the ability to implement remote production and low-cost IP links. I think that was just about everything I had to talk about today.
Jim Jachetta (44:28):
Awesome. Awesome. I really like how, on the transmitter and the receiver, you have the SFP cages. That kind of makes the design future-proof, right, for some new standard? And I assume those SFPs could be copper or optical, right? It’ll take, theoretically, any kind of SFP you might stick in there.
David Edwards (44:53):
Yes. And most of our customers are making the choice of whether those are 3G SDI SFPs or 12G SFPs for 4K. That’s typically the choice our customers make, depending on the other equipment and the infrastructure that they’ve installed.
David Edwards (45:12):
But for HD, we swap those SFPs out for SMPTE 2110 interfaces. If we’re talking about 4K SMPTE 2110, then we have some expansion ports; you can see another set of SFP cages on the far left of that rear panel of the unit.
Jim Jachetta (45:31):
Very cool. Very cool. And you mentioned your receiver can have 16 RF inputs. So using a golf analogy, you’d theoretically have antennas at each hole? What would that look like? Maybe describe that for us.
David Edwards (45:50):
That’s typically the sort of application I’m talking about. So when I say 16 RF inputs, I’m not talking about 16 wireless cameras. What I’m really talking about is 16 antennas that may be dedicated to a single demodulation, using diversity reception to build up robust reception, whether there are reflections or whether the wireless camera is moving to a different geographical area within an event.
David Edwards (46:19):
And golf is a really good example of that. So you may choose to implement a number of antenna sites over a golf course. You may set up an antenna site, which may have multiple antennas in a single area, may be covering a 360 degree view around that area. And that may enable you to cover a number of holes within the golf course. And you may then set up another antenna cluster at a different location around the golf course to enable you to cover some more.
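The antenna-cluster idea above can be sketched in code. This is an illustrative model of selection diversity, not Vislink’s actual switching algorithm: per time slice, the receiver simply follows whichever of its 16 RF inputs reports the best signal metric.

```python
# Illustrative sketch of selection diversity (not Vislink's implementation).
# Each antenna reports a signal-quality metric (here, SNR in dB) per time
# slice; the receiver switches to whichever feed is currently strongest.

def select_antenna(snr_by_antenna):
    """Return the index of the RF input with the highest reported SNR."""
    if not snr_by_antenna:
        raise ValueError("no antenna measurements")
    return max(range(len(snr_by_antenna)), key=lambda i: snr_by_antenna[i])

# A camera moving between two clusters on a golf course: cluster A
# (inputs 0-7) fades as cluster B (inputs 8-15) strengthens.
near_cluster_a = [28, 27, 26, 25, 10, 9, 8, 7, 12, 11, 10, 9, 5, 4, 3, 2]
near_cluster_b = [12, 11, 10, 9, 8, 7, 6, 5, 24, 26, 25, 23, 14, 13, 12, 11]
print("selected RF input:", select_antenna(near_cluster_a))  # input 0
print("selected RF input:", select_antenna(near_cluster_b))  # input 9
```

Real receivers can also combine feeds rather than just switch between them, but the hand-off between antenna sites follows the same basic idea.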
Jim Jachetta (46:53):
So you could combine those groups of antennas and feed that on one RF channel or home run them individually? You have the flexibility, I guess.
David Edwards (47:02):
You have the flexibility. So that’s typically what it’s about, and golf is a good example of that. What else have I seen? We’re coming up to the Wimbledon fortnight for tennis, and in that scenario, at the end of the championship, you might see the winner, usually not a British winner, being escorted up from the court, up through the building, and onto a balcony.
David Edwards (47:28):
And so some of our customers choose to implement antenna sites as that champion goes on that walk around the Wimbledon complex. And that’s another good example of using multiple antenna feeds.
Jim Jachetta (47:46):
That’s awesome. That’s awesome. I have another question. I’m going to put up a slide, just so people know where to contact VidOvation. Yeah. So a question about… I think I know the answer to this, but I’d like to get your perspective, David.
Jim Jachetta (48:05):
Is there a latency penalty to implementing SRT? I know SRT has its own buffering, so any kind of IP transport can’t be zero latency; you’d have to add latency to that. And there are some adjustments. Is that true of the implementation of SRT?
David Edwards (48:22):
Yes. So SRT has some FEC on it, and it also has repeat requests. So if some packets are reordered or dropped, then the SRT protocol has the ability to ask for a resend of that data. And all of those things, yes, do have an impact on latency.
David Edwards (48:44):
In that situation, you may find that if you are using SRT over the unmanaged internet, it may not be a tier one, tent-pole event. It may be a lower-tier event, and so latency may not be quite so critical as it is in other applications. It’s about choosing the tools, and the way that you operate, that are right for the production that you are doing.
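As a rough illustration of that latency trade-off: SRT’s receive buffer must be deep enough to allow a few retransmission round trips, and a common starting point suggested in SRT deployment guidance is a latency of roughly four times the network round-trip time. The helper below is a hypothetical sketch, not part of any Vislink product:

```python
def srt_latency_budget_ms(rtt_ms, rtt_multiplier=4):
    """Rule-of-thumb SRT receive-buffer latency: leave room for a few
    lost-packet retransmission round trips (commonly ~4x RTT)."""
    return rtt_ms * rtt_multiplier

# A stadium-to-production-center path with a 25 ms round-trip time
# would add roughly 100 ms on top of encode/decode delay.
print(srt_latency_budget_ms(25), "ms")
```

Lossier or longer paths call for a larger multiplier, which is one of the "adjustments" Jim refers to: you trade added delay for resilience.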
Jim Jachetta (49:15):
I’ve found that SRT, because it’s an open standard and because it works… obviously it works well on a managed network, but it also works well over the public internet, an unmanaged network. It’s kind of become the interoperable protocol of choice now. With SRT, you could bring the feed to the studio, or bring it to the cloud for remote production, REMI production. It really allows your wireless system to interop with virtually anything, right?
David Edwards (49:49):
Correct. Yes. There is the SRT Alliance, and Alliance members all take the latest code and implement that. So it’s not quite what you’d call an open standard. It is an Alliance, but it is an industry-adopted standard, shall we say. And that’s fine with us.
Jim Jachetta (50:11):
Right, right, right. Well, thank you so much, David. I think that does it for the questions. If anyone listening wants to share this with their colleagues, it takes us about a week: we edit the video, transcribe it, and put it online.
Jim Jachetta (50:33):
I would say a week maximum. We’ll put up the recording of this session. A lot of great information, David; once again, thank you so much for sharing. And you can contact VidOvation.
Jim Jachetta (50:50):
People have a tendency to put an E in VidOvation, making it VideoVation. It’s V-I-D-O-V-A-T-I-O-N.com. That’s our website. You can reach our [email protected]. Or you can call us at (949) 777-5435. So thanks, David. Thanks, Fallon. Thanks, everyone, for joining today. We’d love to help you with your wireless needs, so give us a call or reach out to us anytime. Thank you.
David Edwards (51:23):
Thank you, Jim.