The Internet Video Problem:
The Internet was never designed for Video
- “Best-Effort” Transport Only
- No Prioritization
- Routers Drop Packets to Alleviate Congestion
- Dynamic Load Balancing Reorders Packets
Traffic is either UDP or TCP
- UDP Transport is Real-Time, but Lossy
- No Inherent Packet Recovery
- No Guarantees on Packet Order
- TCP Uses Positive Acknowledgement Packet Recovery
- Not Real-time: Pauses for Unrecoverable Packets
- Max Bitrate Limited by Distance & Node Hops
The Internet Video Solution: ARQ – Automatic Repeat Request
- Automatic Repeat reQuest
- Feedback Requests Resending Lost Packets
- Receiver Delay to Allow Time for Recovery
- Add Receiver Buffer to Create Delay
- More Resilience → Larger Buffer
- Lower Latency → Smaller Buffer
- Capable of 100% Recovery
- Capable of Full Recovery with Large Loss %
- Zero Overhead on a Clean (Lossless) Network
Jim Jachetta (00:00:00):
Good morning, everyone. Jim Jachetta here with VidOvation Corporation. I’m the CTO
and co-founder. Today, we have a very special guest, Ron Fellman, PhD, founder and CEO of QVidium
Technologies. Thank you, Ron. Welcome.
Ronald D. Fellman (00:00:23):
Thank you, Jim. As you mentioned, I’m Ron Fellman. I’m the founder and CEO of QVidium. Before that I
was the founder and CEO of Path1 Network Technologies. Today, I’m going to talk about ARQ for video
transport, how it works, a little bit of the history of how we came to invent it, and our latest product, our
HD HEVC 4K codec.
Jim Jachetta (00:00:58):
I want to get a little gauge from some of our viewers. Are you folks transporting video over the public
internet for a live broadcast application? We’re talking more than GoToWebinar, Skype, or Zoom,
something for broadcast purposes, production purposes, over-the-top OTT, something like that. Are you
folks streaming video over the public internet right now? Just want to get a feel for where you guys are.
Ron’s company is one of the pioneers in this area and was recently acknowledged. What was it, last year,
Ron, by the Academy… the National Academy of Television Arts and Sciences. So he has an Emmy award
for technology.
Jim Jachetta (00:02:07):
So here, let me see here. Now let me… Manage the poll, no, that’s not what I want to do. I want to
close and share. Here we go.
Jim Jachetta (00:02:18):
So here, here are the results. You can see 60% of people are streaming and 40% don’t have plans to
stream. Ron, we’ve got to win them over, or I hope we’re not wasting their time today if they have no plans
to stream live. Or maybe it’s my marketing people; they only do Zoom and GoToMeeting. They don’t
do live broadcasts. But thanks, everyone, for voting. So let me advance the slides here for you, Ron, and
let’s get to it.
Ronald D. Fellman (00:02:58):
All right. So as you may know, if you’re familiar with the internet, when the internet was designed back
in the sixties, it was never designed for live video. In fact, that technology didn’t even exist back then. It
was designed for best-effort transport. There’s no prioritization. In fact, even multicast is not allowed to
go through the internet. The routers will drop multicast packets and will drop any prioritization. So if you
have congestion at nodes, with multiple streams trying to go through a particular node, the routers are
designed to drop the packets. They’ll sometimes have various algorithms to decide what goes first. But
the bottom line is, if you’re trying to get packets over the internet, not only can you not rely on the
packets getting through, but some internet service providers do things like dynamic load balancing to
alleviate congestion: in the middle of sending a bunch of packets out, they’ll switch the routing around,
which causes the packets to come in out of order.
Ronald D. Fellman (00:04:15):
So if you’re trying to send video, you’re going to have to understand that packets can get dropped, they
can arrive out of order, and there can be random delays, which we call jitter. None of these things are good
for trying to get any kind of live broadcast video. Of course, if you’re streaming video where it’s basically
short file chunks that are being buffered and played out, it’s not much of a problem. So the biggest issue
is if you’re trying to do a live low-latency broadcast over the internet. [crosstalk 00:04:48]
Jim Jachetta (00:04:49):
So Ron, why did you invent this technology? I think you had certain customers that were
having a problem trying to stream through the internet, right? How did this all come about?
Ronald D. Fellman (00:05:03):
So to get to the why, it really started with telemedicine, actually. Before starting QVidium, I had started
and was working at Path1 Network Technologies, and all of our customers there were large
broadcasters. They had their own private network. I left Path1, and I was approached by a friend. I used
to be a professor at UCSD, and a former colleague there knew the doctors in the stroke center at UCSD.
These two neurologists had a big problem. They had this new wonder drug called tPA, and
it’s like a miracle drug that will dissolve clots very quickly. And if somebody has a stroke, if they have…
There are two types of strokes: there’s an aneurysm, where the blood vessel bursts, and then there’s a
thrombotic stroke, where there’s a clot. And if they had a thrombotic stroke, this is like a miracle
cure.
Ronald D. Fellman (00:06:09):
But if you give it to somebody who had an aneurysm, then you’ve killed them. So you needed a way to
diagnose people. And there aren’t a lot of neurologists around. If you’re in the middle of…
Well, one of our first sites was Brawley, California, and there aren’t a lot of neurologists in these remote
rural areas. So these neurologists at UCSD wanted to be able to diagnose people far away. And because
this drug was available in all the emergency rooms, where it was mostly used for heart attacks, they now
had an opportunity to literally cure people who had strokes miles away from the nearest
neurologist. And so they came to me saying we needed good-quality video. The video couldn’t have any
lost packets, because if you lose a packet, you’re going to get some kind of jitter or pausing in the video,
and that defeats the whole purpose of being able to diagnose the patient.
Ronald D. Fellman (00:07:10):
So a neurologist is going to have a patient do something like raise their arm and then slowly lower it.
If somebody had a stroke and there’s weakness, they could see it by the jerkiness or jitteriness in
the motion, but the same kind of problems can appear if you have packet loss. So they needed to rely on
the fact that the video was coming through, that every single packet was getting through, and that you
were getting a smooth, reliable feed. And so that was the genesis, or the inspiration, to create this ARQ for
the internet. ARQ is not a new concept, and there’s a slide on that later on. Maybe
we’re jumping ahead of ourselves, but going back to the slide over here, if you look at what
the internet was designed for, there are really basically two types of traffic that you can send on the
internet.
Ronald D. Fellman (00:08:19):
There’s UDP, which is User Datagram Protocol, which is basically like the Wild West. You send a
packet out and you have no idea if it’s going to get to the other side or not, or if it’s going to arrive
out of order. So you have to do things to UDP to make it work for video, which is what we do
with our ARQ technology. And then there’s TCP. And TCP is designed for reliable transport. It’s the
Transmission Control Protocol, where you have these windows of packets that get sent across,
and with every window you have an acknowledgement. So it’s great for things like email or web pages,
to make sure that all the email goes through, but there are no real-time guarantees. It could pause
indefinitely, waiting for a packet to get over to the other side.
Ronald D. Fellman (00:09:21):
Pauses could be 30 seconds or longer. Every single time a window of packets gets to the other side, it
has to wait for a positive acknowledgement from the other side before it can send out another window
of packets, and this causes a limitation on the maximum bit rate. If you have a lot of hops, you have to go
around the world, maybe up through a satellite, and you have to wait each time for an
acknowledgement. Well, the longer the latency is, the more time it takes before you can get this
acknowledgement, and the slower the maximum bit rate that you can send through with TCP is. So, TCP
is okay if you have a relatively good link and you’re just doing streaming where you don’t care, if you’re
buffering up the video and there’s a lot of delay, but it’s really not appropriate for a live broadcast.
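A rough sketch of the bit-rate ceiling Ron describes, in hypothetical Python (not QVidium code): with one acknowledgement per window, TCP can keep at most one window of data in flight per round trip, so throughput is bounded by window size divided by round-trip time.

```python
def max_tcp_throughput_bps(window_bytes: float, rtt_seconds: float) -> float:
    """Upper bound on TCP throughput: one receive window per round trip."""
    return window_bytes * 8 / rtt_seconds

# A classic 64 KB window over a 600 ms satellite-hop path:
print(max_tcp_throughput_bps(64 * 1024, 0.600))  # ~874,000 bps, under 1 Mbps
```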
Jim Jachetta (00:10:17):
So your approach is to take a UDP stream and make it more robust with ARQ, which we’ll
describe on the following slides.
Ronald D. Fellman (00:10:29):
Exactly. So we start with UDP, and if you go to the next slide, you’ll see what it is we do.
Jim Jachetta (00:10:38):
I hear you, Ron, but maybe it’s a little muffled since you’re using the internal mic. Maybe lean into your
mic a little bit so we can hear all your good stuff here. Let me advance. Yeah, I think you’re using the
built-in mic, right, on your computer?
Ronald D. Fellman (00:10:55):
Yeah. I do have a headset if you want me to use it.
Jim Jachetta (00:10:57):
No, that’s much better now with you leaning in. That’s perfect.
Ronald D. Fellman (00:10:59):
All right. Okay. So what we decided to do… These neurologists came to me; they wanted the packets to
all be there and not to get lost. So the idea was to have a feedback mechanism. Before that, there were
mechanisms to try to fix packet loss: you’d send extra packets along with the stream, and if some of the
packets got lost in the stream, you’d use the extra packets, checksum packets, to try to fill in the gaps.
But that only goes so far. If you have large amounts of random packet loss, FEC really doesn’t work well.
Back when I was at Path1, another thing we tried in addition to FEC… By the way, the FEC we did at
Path1 became the standard ST 2022, which was also known as Pro-MPEG Forward Error Correction. But
again, that was only really good for private networks.
Ronald D. Fellman (00:12:00):
The other thing they tried doing was sending a completely redundant second stream. Again, if you have
large amounts of packet loss, this might not even fix things. And the problem with that is you’re also
doubling the bandwidth. So after leaving Path1 and getting in touch with these doctors, I thought the best
thing to do would be to have a feedback mechanism. So that’s really what ARQ is all about. If you lose a
packet, you detect that at the receiver. You make sure you have sequence numbers, and there’s
already a protocol in place for doing that, called RTP, which puts a little header on UDP packets. It adds in
sequence numbers, it adds in a timestamp. And we use those mechanisms, those placeholders that
they have in the RTP packets, to check the sequence numbers, to see if there’s a gap in the sequence
numbers. And if there is, you have the receiver send a packet back upstream, requesting a retransmission
of whatever packets were lost. So-
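To make the mechanism concrete, here is a minimal sketch (hypothetical code, not QVidium’s implementation) of the gap detection Ron describes: the receiver watches the 16-bit RTP sequence numbers, and any numbers skipped between the expected packet and the one that arrives become a retransmission request sent back upstream.

```python
RTP_SEQ_MOD = 65536  # RTP sequence numbers are 16 bits

def missing_sequences(expected: int, arrived: int) -> list[int]:
    """Sequence numbers skipped between the packet we expected and the
    packet that just arrived; these are what the receiver requests again."""
    count = (arrived - expected) % RTP_SEQ_MOD
    return [(expected + i) % RTP_SEQ_MOD for i in range(count)]

# Packet 105 arrives while 101 was expected: request 101-104 upstream.
print(missing_sequences(101, 105))  # [101, 102, 103, 104]
```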
Jim Jachetta (00:13:11):
You mentioned Forward Error Correction, too. So say you set
Forward Error Correction for, like, 20% redundancy. But if everything is operating normally and there are
no packet losses, you’ve now taken 20% away from the usable bandwidth for video, where you could push
through higher-quality video. So in a sense, ARQ only kicks in when it’s needed, right?
Ronald D. Fellman (00:13:39):
That’s an excellent point. I’m glad you brought that up. And that’s one of the points we mentioned
here at the bottom.
Ronald D. Fellman (00:13:45):
There’s literally zero overhead if you have a clean network, unlike FEC. With ARQ, you’re only sending those
retransmission requests if you have packet loss, and any kind of packet loss, any pattern of
packet loss, you can correct, because you’re asking for whatever packets were lost, and you can ask
for them over and over again. Unlike FEC, where you only get whatever packets are given to you; you can
try to use them, but if you don’t have enough packets to replace the lost ones, you’re out of luck. And as
Jim said, you’re sending all those extra packets without any guarantee that they’re going to fix it.
Jim Jachetta (00:14:27):
Right. Or I would say, in my mind, FEC is kind of a brute-force approach. We’ll send all this
redundancy through, 20% redundancy, and hope the packets that we don’t… So if we set it at 20%, we
hope we don’t exceed 20% loss. But you could have a frame that loses 30% of its packets; FEC won’t
be able to recover it, where ARQ could fill in all those missing pieces.
Ronald D. Fellman (00:14:57):
That’s right. So ARQ is capable of a hundred percent packet recovery, even with very high loss
rates. I mean, we’ve seen loss rates of 40, 50%. Really, the only downside there is you need an extra
margin of overhead to be able to send large bursts to recover packets. But again, if you don’t have a lot
of loss to begin with, you don’t need much overhead. Now, the only negative on ARQ is
that you do create a buffer on the receive side. At the receiving side, instead of just
receiving the packet and sending it out to the video decoder, you put a little buffer in the way. This
buffer holds the packets long enough that if you have any gaps, there’s time for the
receiver to send a request upstream to the encoder, to the transmitter.
Ronald D. Fellman (00:15:57):
And the encoder doesn’t have to be the transmitter. We actually have ARQ software that you can use if
you have an encoder that doesn’t have ARQ included in it. And so whatever the sender is, the receiver
is going to send a packet upstream, the retransmission request packet, for the sender to send back a
copy of whatever packets are missing. And then they get put in the buffer, put back into the right order
in that buffer, so that when they get played out of the buffer, they get played out in the right order,
hopefully with all the packets there. If you allow more time, you have more chances of asking for
a retransmission, because with ARQ you don’t just ask once. You can ask multiple times, in case there’s
packet loss with the upstream packets as well, or there’s additional packet loss for the retransmitted
packet.
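A minimal sketch (all names hypothetical) of the receive-side buffer Ron just described: packets are held for a fixed time and kept sorted by sequence number, so a retransmitted copy has time to arrive and be slotted into place before playout.

```python
import heapq

class ArqReceiveBuffer:
    """Hold each packet for hold_time seconds, sorted by sequence number,
    so lost packets can be requested (possibly more than once) and put
    back in order before they are released to the decoder."""

    def __init__(self, hold_time: float):
        self.hold_time = hold_time  # bigger buffer -> more recovery tries
        self.heap = []              # (sequence, release deadline, packet)

    def insert(self, seq: int, packet: bytes, now: float) -> None:
        heapq.heappush(self.heap, (seq, now + self.hold_time, packet))

    def pop_ready(self, now: float):
        """Release, in sequence order, every packet whose deadline passed."""
        out = []
        while self.heap and self.heap[0][1] <= now:
            seq, _, packet = heapq.heappop(self.heap)
            out.append((seq, packet))
        return out
```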
Ronald D. Fellman (00:16:52):
So you can ask multiple times; you just need a bigger buffer, and that gives you more resilience. If you need
lower latency, well, you have a smaller buffer, and maybe there’ll be a little less resilience. But the buffer
size isn’t actually that large. If you look at what FEC requires, with FEC you have to have a buffer of all of the
packets in the window that you’re doing the FEC over. You have to receive all of those packets
before you can start processing the checksum packets. So that takes a certain amount of delay. If you are
on a wireless network, or one without too many hops, where the round-trip time is fairly low, you might
actually have a smaller delay using ARQ than FEC.
Jim Jachetta (00:17:38):
I know many systems, some of the systems we utilize, use a combination of both, some FEC and ARQ. I
think some vendors even have a dynamic element to the FEC: the FEC is set to a minimum, and if
the connection is getting poor, it goes up and down. So are you a proponent of not using FEC at all, or
do you see cases where you might use a combination of ARQ and FEC? What are your thoughts on that?
Ronald D. Fellman (00:18:14):
That’s an interesting question, because when we first started doing this, we did actually combine FEC
with ARQ. The telemedicine system that we created for the neurologists did have the ability to do both
FEC, and then ARQ on top of it. But we found that it really didn’t help. You’re using up, or wasting, the
extra bandwidth for the FEC, but the ARQ is so efficient and so effective that having FEC in addition to it
was actually just a waste of bandwidth and time, because you have the additional latency of
processing the FEC buffer. So we dropped that in the products we started making for broadcasters,
because the key there was to try to get lower latency while you’re doing the error recovery.
Jim Jachetta (00:19:02):
Very good, very good. Are you done with this slide? You want me to continue?
Ronald D. Fellman (00:19:06):
Yeah, let’s go to the next one. So one way that you could envision using this is, if you have multiple
encoders, you can send them up to a central location, maybe a server in the cloud that’s running an
ARQ proxy server. Like I said, we have our ARQ both within our products, and I’m showing the
QVCodec 4K, our new 4K boxes, as an example of some encoders sending up into the internet. You can have
Amazon Web Services, for example, host some software that runs the ARQ. We’ve had many
installations where we would use ARQ to get a live stream up into a server, and then use it… Maybe to
send it to a CDN, and at the CDN they can even send out a stream with RTMP or some other protocol.
But if you really want low latency, you’re going to want ARQ on both ends.
Ronald D. Fellman (00:20:05):
And so you can have the proxy server in the middle send out to multiple destinations to keep the latency
low. You can use ARQ with encryption, and we do that all the time. And this kind of technique I’m
showing here really minimizes the amount of bandwidth, because most facilities don’t have a lot of
upstream bandwidth, but there’s plenty of bandwidth in the cloud. So if you want to distribute a stream
to hundreds of locations and you don’t have a lot of bandwidth where you are, you just need to send
one copy of the stream up to the server, where the ARQ can replicate the stream and send it to as many
places as you want, with plenty of bandwidth to do that.
Jim Jachetta (00:20:49):
Well, what was I going to say? So one of the things, Ron, I guess one of your big differentiators, a
strength of this technique, is low latency. I know with some of our bonded cellular
partners, the public internet now is pretty reliable. Statistically, the losses are now minimal. You can get
higher-quality internet connections. I think with cellular, it’s a different animal. The jitter is
crazy, the latency is yo-yoing up and down, the bandwidth is going up and down. So I think it’s a much
more challenging environment, and in that sense maybe a little bit of FEC and ARQ is warranted.
But your sweet spot is low latency over the public internet. Isn’t that correct?
Ronald D. Fellman (00:21:58):
That’s right. And you’ll see that in the next slide; it’s really a cornerstone of the patent that we have. So
this diagram [inaudible 00:22:06] came right out of the patent that we have for ARQ. And you’ll notice a key
component of it is an area that we call Clock Sync, clock synchronization. What we do is we timestamp
the packets when they go out, and we have a clock on the receiver that’s synchronized to the timestamp
of the packet when it goes out. Now, you could use any type of clock. It could be NTP timestamps, but
our patent covers all of that. So this is unique to QVidium; any other kind of FEC, or I’m sorry, any other
kind of ARQ technology, would be infringing on our patent if it were to use this kind of clock
synchronization to try to minimize the latency. And why that’s so important is because you create
this extra clock at the receiving end, so that you can, you know-
Ronald D. Fellman (00:23:03):
At the receiving end, so that you can wait long enough to gather back any
missing packets and put them in the right order, you want to be able to make that packet buffer on
the receive side just as small as it needs to be, without having to wait extra. So what you can do is, if you
have everything synchronized and you know what the delay is, which you can measure, from the sender
to the receiver, then you can time when the packets leave the buffer using your clock. And that really
allows you to create an ARQ system with the absolute lowest delay.
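A minimal sketch of that timing idea (hypothetical names, and only the skeleton of what the patent describes): because the receiver’s clock is synchronized to the sender’s timestamps, each packet can be released at exactly its send time plus a measured delay budget, so the buffer is never held longer than it needs to be.

```python
def playout_deadline(send_timestamp: float, delay_budget: float) -> float:
    """Release time for a packet: its send timestamp plus a fixed budget
    covering the measured network delay, jitter, and ARQ hold time."""
    return send_timestamp + delay_budget

def ready_to_release(send_timestamp: float, receiver_clock: float,
                     delay_budget: float) -> bool:
    # receiver_clock is synchronized to the sender's clock, so comparing
    # timestamps across the two machines is meaningful.
    return receiver_clock >= playout_deadline(send_timestamp, delay_budget)
```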
Ronald D. Fellman (00:23:39):
So going into this diagram, you see how this works. In your encoder, you have the packets going out of
your encoder, going to the transmit unit, which is where it is stamping the packets. The green dots there
are to animate, or just show, the packets leaving the transmitter, leaving the encoder, going into the
receiving unit. And the red is to show the reverse stream, the retransmission request stream, going back
up to the sending side, where it’s asking for the additional packets. Now, what you do in ARQ on the
sending side, there really isn’t that much, other than stamping the packets with the time clock and
processing requests coming back in. You have an ARQ packet store where you’re just storing the packets
as they’re going out. That doesn’t add any extra latency. At the same time the packet is going out, it’s
going into this ARQ packet store.
Jim Jachetta (00:24:42):
Yeah. But on the encoder side, you don’t usually think of buffering. You think of buffering on the
receiver for FEC or ARQ. In order for the packets to be retransmitted, you have to have a bucket to store
the packets for retransmission. So that’s the ARQ packet store.
Ronald D. Fellman (00:25:02):
That’s right. But that doesn’t delay anything, because [crosstalk 00:25:06] it happens at the same time as
the packets go out; it doesn’t eat up any time doing that.
Jim Jachetta (00:25:13):
And based on… How deep is that storage usually, a couple of frames, a couple of lines? How do you
gauge how big that storage is?
Ronald D. Fellman (00:25:25):
Well, usually that really depends on the sequence number. What we do is an easy
implementation, and again, the patent covers all sorts of implementations where you can have much
larger sequence numbers, but the typical RTP packet header only has 16 bits, which gives 64,000 packets’
worth of sequence number space, really. So we could store up to 64,000 packets, which is
enough for even relatively high bit rates. The higher the bit rate, the more storage you would need,
because [crosstalk 00:03:06].
Jim Jachetta (00:26:08):
Correct.
Ronald D. Fellman (00:26:09):
But it’s really not been much of a problem.
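A small sketch of the 16-bit arithmetic behind that limit (hypothetical code): RTP sequence numbers wrap at 65,536, so both "which packet is newer" and the useful depth of the sender’s packet store are bounded by that number space.

```python
SEQ_MOD = 65536  # 16-bit RTP sequence numbers wrap at 65536

def seq_newer(a: int, b: int) -> bool:
    """True if sequence number a is logically newer than b, modulo 2**16."""
    return a != b and (a - b) % SEQ_MOD < SEQ_MOD // 2

print(seq_newer(3, 65530))  # True: 3 follows 65530 after the wraparound
```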
Jim Jachetta (00:26:16):
Well, then I guess that begs the question: this is all automatic? So me, the user, we don’t even have
to think about this; the mechanism just runs? I don’t have to think about the depth of the buffer, or can
you adjust the buffer depth on the decoder side?
Ronald D. Fellman (00:26:38):
That’s also a really good question. Let me talk about that a little bit. The way we’ve implemented ARQ,
the calibration is completely automatic and there’s very little that the user has to do, and yet it optimizes
the error recovery. The only real parameter that you have to think about is how much extra delay
you want to add to the system. The more delay, the more chances it has to retransmit the extra packets,
the more tries you have to recover any lost packets.
Ronald D. Fellman (00:27:09):
And you have complete control over that. In our systems, you could set the latency to zero if you
want, and then the system will measure the round-trip time, figure out what the minimum value is, and
then put that in there and create a buffer that’s really minimal. It could be just one round-trip time. We
have a robust mode where it makes sure it’s at least two round-trip times, and then you don’t even have
to think about it. You just start it up and let it do its thing with automatic configuration, and you don’t
have to set anything.
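As a rough sketch of that automatic calibration (constants and names hypothetical, not QVidium’s actual code): measure the round-trip time and jitter, then size the ARQ delay at one round trip, or two in the robust mode, unless the user overrides it with something larger.

```python
def arq_delay(rtt: float, jitter: float, user_delay: float = 0.0,
              robust: bool = False) -> float:
    """Automatic ARQ buffer delay: one measured round trip (two in the
    robust mode) plus measured jitter, unless the user asks for more."""
    round_trips = 2 if robust else 1
    return max(user_delay, round_trips * rtt + jitter)

print(arq_delay(rtt=0.050, jitter=0.010, robust=True))  # 0.11 seconds
```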
Ronald D. Fellman (00:27:48):
On the other hand, if you did want to completely configure it yourself, you have access to all those
parameters. You can, in fact, override the round-trip time. You can override the amount of jitter that
the system is calculating in the network. Because if there’s jitter in the system, you have to add that to
the buffer, so that the buffer can buffer out any jitter. You know, packets are being held at nodes
throughout the internet for random amounts of time, which creates random delays of the packets, which
we call jitter, which we want to buffer out, and have a little extra buffer space to do so.
Ronald D. Fellman (00:28:23):
So on the internet itself, even if there’s no packet loss at all, you’re going to want to have a little bit of
buffering, because otherwise the packets arriving at your decoder will have gaps of time before they
come in, which would be the jitter caused by-
Jim Jachetta (00:28:38):
Yeah, they might not be missing, but they might be out of order. So you need a little bit of time to put
them in the proper order, correct?
Ronald D. Fellman (00:28:44):
Exactly. They could be out of order. They could just be delayed, even if they’re not out of order.
And that could cause a decoder to run dry, which would also cause freezing. So you have to have at least
enough buffering in there to buffer out the jitter, to buffer out any reordering that might take place.
And again, this is all done basically automatically in our system, but you can put in larger times
if you happen to know that there’s some reordering going on and you know the length of time,
the latency, that that happens over. So you have both the ability to have it done completely
automatically and the ability to override the settings and adjust for specific types of scenarios.
Jim Jachetta (00:29:32):
Very good. All right. Let me advance. Oh, the animation started again. There we go.
Ronald D. Fellman (00:29:39):
We were talking a little bit before about the origins of ARQ. You see a picture over there: the
governor at the time, Arnold Schwarzenegger, came to see the two neurologists, or the two guys
[crosstalk 00:29:55]. So this was back in 2004. There was an article written, I think it was a Union-Tribune
article, or some such magazine or newspaper, about the neurologists. We had systems set up not only
throughout California but in other states. A lot of systems in Arizona, Montana, even in New York. At
one point, this was under VF Technology, which we started immediately after Path1 and which then got
subsumed by QVidium.
Ronald D. Fellman (00:30:32):
So as I mentioned, the concept of ARQ, automatic repeat or retransmission request (I think
technically it’s repeat request), is not new. It was used in shortwave radio. There’s a paper from as far
back as 1963 about selective repeat ARQ. But most of the systems that were called ARQ were positive
acknowledgement. In other words, like TCP, you send some packets out, or even one packet, and you have
to wait for a reply before you can send more packets out.
Ronald D. Fellman (00:31:11):
And that has its own problems. But in video, the idea is that if you’re missing a packet and you haven’t
asked for it in time, it’s already been played out. Once that part of the video has played out, you can’t go
back and reinsert it. So there’s a real-time, live aspect to sending live video. We thought it was
much more efficient, smarter, to have instead a negative acknowledgement system. So we created our
ARQ, unlike the other ARQ systems that were around at the time, to only ask for packets if they
were needed, if they were missing. And it also had a timeout mechanism, because it is video, after all, that
we’re optimizing this for, which had never been done before: if the packet was missing, but that
portion of the video stream had already been played out, there’s no use asking for it again.
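That timeout rule reduces to a single comparison; a minimal sketch (hypothetical names): a missing packet is only worth requesting again if a retransmitted copy, roughly one round trip away, could still arrive before its part of the video plays out.

```python
def should_request_again(still_missing: bool, playout_deadline: float,
                         now: float, rtt: float) -> bool:
    """Ask again only if a retransmitted copy (about one round trip away)
    could still arrive before that part of the video is played out."""
    return still_missing and (now + rtt) < playout_deadline
```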
Jim Jachetta (00:32:06):
[inaudible 00:32:06] now it’s too late.
Ronald D. Fellman (00:32:09):
It’s too late. So unlike TCP, which is trying to require every last packet to always get there, we
realized there could be certain limitations with the internet. There could be unrecoverable packets. And
by the way, in the telemedicine system, there was a little square indicator on the
screen that the doctors could look at. It would be green if all the packets were being recovered
and the video they were looking at was perfect, or red if there was some unrecoverable packet loss. And
so for that portion of the video, they knew not to trust it.
Jim Jachetta (00:32:47):
So you were saying, when we were talking the other day, that this telemedicine
necessity is a matter of life and death. Like you said, if they diagnosed the stroke incorrectly and gave the
wrong medicine, they could kill the patient. And I know we do a lot with sports: the tracking of a
baseball leaving the bat, going over a sea of faces, or the basketball, the orange ball, going over a sea of
faces. If you’re dropping packets, you get artifacts, you get jitter, you get stuttering, it’s dropping
frames. So if you’re trying to do this neurology test… Hey, if we drop a packet during Monday Night
Football, no one’s going to die.
Jim Jachetta (00:33:39):
So I say that your application is more critical than broadcast television. So you came from the roots of
something way more critical. And so, if the decoder… I guess it’s like the expression, no news
is good news. If the decoder doesn’t say, “Hey, encoder, I lost packet 101”, it assumes it got it. So
the only time the ARQ makes a request is, like you said, the negative acknowledgement. I didn’t realize
that that was your nuance. So that’s your patent? That’s how you do it?
Ronald D. Fellman (00:34:21):
Really, it’s combining that with the synchronization. So the patent has both aspects in there. Around
the time that we did ARQ, Fujitsu had a positive acknowledgement ARQ for video. And again,
nobody had synchronization either. So those two things combined is what’s unique in our patent.
Ronald D. Fellman (00:34:43):
And also around that time, when we were doing this, Polycom had a video conferencing system. Some
people were doing some video, but what they did, if they had packet loss, was they would interpolate the
video to fill in the gaps. So they would sort of fudge it. And you couldn’t do that. In a stroke case, where
there’s suddenly packet loss and suddenly you’d have a pause or skip and the arm would suddenly go
down, in the Polycom type of system they’d fill in the gaps and make it
look smooth, even if it wasn’t. And that would be disastrous if you’re trying to diagnose a stroke patient
who might actually have had a weakness. [crosstalk 00:35:32]
Jim Jachetta (00:35:33):
And then you’ve smoothed it out. Or I can think of another example: you’re looking at an MRI or an X-ray,
and there’s a black spot, there’s cancer, there’s a problem, there’s a tumor. Maybe it’s very small, and
you smooth that out or extrapolate it, and you miss something. Some of our other vendors call what
you’re talking about concealment. So, like in television, if there’s an area of green grass and a packet gets
dropped, and Forward Error Correction or ARQ can’t fix it, the decoder is going to make the assumption, if
everything around it is green, I’m going to assume the missing packet is green. But maybe that missing
packet is a golf ball or something like that.
Ronald D. Fellman (00:36:21):
Exactly. That’s a good analogy.
Jim Jachetta (00:36:23):
Yeah.
Ronald D. Fellman (00:36:25):
So this was back in 2003, when we started doing that. Then we started to… We had some contacts
in the broadcast industry, still from my Path1 days, and there was a broadcaster in France, I
think it was RTO, that was saying they wanted to be able to take ASI
and send it over the internet to another gateway at the other side, to get it up to a satellite.
Ronald D. Fellman (00:36:55):
And so the very first systems that we made at QVidium were actually ASI-to-IP gateways. Later, we
started making encoders. We started making encoders, I’d say, about 2007, and have been making
encoders and decoders ever since.
Jim Jachetta (00:37:13):
So you just took an ASI stream made by another appliance and then wrapped it, put it in your own
transport with ARQ capability? You added ARQ capabilities to an ASI stream, basically?
Ronald D. Fellman (00:37:30):
Yeah. So what we did was we took a PC, back in those days, and we put an ASI daughter card in the PC, like
a PCI card, that had an ASI interface, and it would send the bits into the PC. We wrote software to
packetize it and add the ARQ to it. At the other end, there’d be another PC with another ASI card, which
would do the processing on it and then feed it back to the ASI card. And so you had ASI in on one side, it
went out with the ARQ over the internet, and got received on the other side by another gateway. These
were basically gateway units, to take the IP and then convert it back to ASI at the receiving end.
[crosstalk 00:38:17]
Jim Jachetta (00:38:17):
You touched on it earlier that you do this today. So many customers say, “Well, I have an older H.264
infrastructure; my encoders and decoders, my encoders and IRDs, are working, but we are getting some
packet loss.” VidOvation and QVidium can put an appliance at either end, at either facility, where we route
the legacy streams through them, add ARQ capability on top, send it across the public internet, and go into
another appliance that decodes the ARQ, or adds the missing packets back in. Is that correct, Ron? That
you don’t have to… Of course, we’d love to sell you the latest and greatest technology, but there are
some very economical implementations, or methodologies, where we can add state-of-the-art ARQ to
older infrastructure. Is that correct?
Ronald D. Fellman (00:39:19):
Not only yes, that’s absolutely correct, but to put a finer point on it, we’ve had customers who’ve
even used Raspberry Pis. It doesn’t have to be a powerful device. It could be a very inexpensive little
Raspberry Pi, the size of a credit card, and it could be doing the ARQ processing.
Jim Jachetta (00:39:44):
So it’s not very… Your software is pretty thin. It’s a thin client.
Ronald D. Fellman (00:39:52):
It doesn’t require much to run on. And the software will run on pretty much any Linux system. And we
also have a version for Windows.
Jim Jachetta (00:40:01):
Awesome. Are you done with this slide? You want me to advance?
Ronald D. Fellman (00:40:04):
Well, right now I’ll just mention, if you want to get a copy of it, the software is at
www.QVidium.com/proxy. Forward slash P-R-O-X-Y. We can send you a trial license, and then you
can talk to Jim about getting a permanent license.
Jim Jachetta (00:40:20):
That’s awesome.
Jim Jachetta (00:40:21):
Oh, folks, I should mention, my marketing folks are telling me the question module for some reason is
not working. If you have any questions, why don’t you just text them to me? My cell phone number is
(516) 551-3201. So text any questions to (516) 551-3201.
Jim Jachetta (00:40:50):
I advanced, Ron.
Ronald D. Fellman (00:40:51):
All right, great. So this is just, again, showing the telemedicine system. We don’t have to spend too much
time here, but you can see what the thing looked like. It was basically an IV pole with a camera
mounted on the top that the doctors could remotely maneuver. You can see a doctor
over in the lower left; that physician is now in charge of the cardiac unit at Cedars-Sinai Medical
Center in Los Angeles. And the doctor over in the lower right pretending to be a
patient, that’s Doctor Brett Meyer.
Ronald D. Fellman (00:41:35):
You can sort of see the camera pointing at him. Remember, this is 2003, 2004, 2005.
This is before you had really good infrastructure with cell phones and cellular modems. So there was an
external modem that they were getting from Verizon that they would attach to their laptop, so they could
literally have their laptop anywhere. They could be driving on the freeway, get a call, and set up their
laptop with this little cellular modem off on the side. So they could do this over a regular cellular network,
connect into the internet, and whatever packet losses occurred, both over the wireless and the internet,
the whole combination of packet loss would be corrected by the ARQ, along with the reordering and the
jitter that could be introduced by the network.
Jim Jachetta (00:42:36):
Right. Well, this approach is now becoming the norm. A couple of friends of mine are
doctors, and they split their time. They do have to go in and see patients face to face for certain types of
treatments, but a lot of the follow-up, a lot of the just seeing a patient: a nurse will be with the patient,
but the doctor will be at home. So I think we’re all adjusting to working with Zoom or GoToMeeting or
something like that.
Jim Jachetta (00:43:15):
But for those situations where you need high-resolution, high-fidelity video, VidOvation and QVidium can
certainly help you with that. And there are applications for this.
Jim Jachetta (00:43:27):
So here are some of your patents. As you mentioned, Ron, last year you guys were recognized by the
Technical Emmy committee. Both you and Fujitsu were credited with inventing ARQ, but with slightly
different techniques. As you said, the negative acknowledgement with the timestamp, that was your
contribution, and then Fujitsu had their approach. You were quite a pioneer there in the early days. So
maybe tell us a little bit more about some of these patents that you have.
Ronald D. Fellman (00:44:11):
So as you can see, I listed three of them. Of course, there are other patents before that, from when I was at
Path1, that were all for private networks. What’s unique here is that these patents were really designed
for getting video over the public internet.
Ronald D. Fellman (00:44:29):
The first one is the general ARQ patent. That’s the one that combines the negative acknowledgements
and the clock synchronization with the standard ARQ type of technology. And so that’s the first patent we
talked about. For that first patent, you could use that synchronization, or other synchronizations, or other
types of clock mechanisms. The second patent goes into a little more detail: it’s the clock
synchronization we found particularly optimal for the internet.
Ronald D. Fellman (00:45:10):
And then we also were dabbling in FEC, as I mentioned, and we created a type of adaptive FEC that was
designed specifically for video, so that if you have I-frames, you would put in more checksum
packets, versus if you had B-frames or P-frames, you’d have fewer.
Ronald D. Fellman (00:45:28):
And it allowed FEC to be used on a variable-bit-rate kind of system. And it had a more efficient kind of
checksum processing. But again, we found that it wasn’t really necessary. The ARQ did such a great
job, we just dropped the FEC.
Ronald D. Fellman (00:45:53):
So one thing is, we’ve created all our systems so that they do not require having a server in the middle. I
did show that diagram…
Ronald D. Fellman (00:46:03):
… having a server in the middle. I did show that diagram before, where you can have a server in the
middle if you want to, but all of our systems are designed really for point-to-point, so that you can just
set up an encoder in one place and a decoder anywhere else in the world. And you can just have those
two units talking to each other without a third server, or any other device on the internet, needing to be
in the loop at all. There’s no-
Jim Jachetta (00:46:29):
When might you need a server in the middle? It would be more like a CDN, where it’s one-to-many? Is
that when you might need a server?
Ronald D. Fellman (00:46:38):
That’s one place. Another place where a server could be useful, and again we have this technology with
our software that allows you to do it, is if you don’t have any way of accessing the firewall. With most
routers, if you have an encoder or a decoder that’s sitting behind a firewall, we have technology that can
deal with automatically getting the packets back through the firewall on one side or the other. We have
our system set up so that you can either do what we call push mode, where on the encoder
side you just put in the IP address of the destination and you send the packets over to the other side. In
that case, if the decoder is behind a firewall, you would need to open up the firewall, at least have a port
forwarding rule, so that the video stream can get over to the decoder. But you don’t need to do
anything for the reverse stream, the retransmission stream coming back up from the decoder to the
encoder; it’s automatic. You don’t have to put any kind of rules on the encoder side.
Ronald D. Fellman (00:47:50):
Or you could do it the other way around. You could have a pull mode, in which case you open up some
port forwarding rules on the sending side, the encoder side, but on the decoder side it just initiates
the stream and you don’t have to worry about setting up any rules. But what if you’re in a situation
where both the encoder and the decoder are sitting behind firewalls and you don’t have any access or
ability to open up ports in either of those firewalls? Well, in that case you have a server up in the
cloud, and you can set up the encoder to be in push mode and the receiver to be in pull mode, and you
don’t have to worry about opening up firewalls on either end.
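A minimal sketch (hypothetical helpers, not QVidium code) of why push mode needs no rule for the reverse stream: the decoder simply replies to the source address of the incoming video, so the retransmission requests ride back through the same firewall/NAT mapping the video itself opened.

```python
import socket

def gap_detected(packet: bytes) -> bool:
    return False  # stand-in for real RTP sequence-gap detection

def build_nack(packet: bytes) -> bytes:
    return b"NACK"  # stand-in for a real retransmission request

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 5000))  # the one forwarded port on the receiving side

while True:
    packet, sender = sock.recvfrom(2048)       # video from the encoder
    if gap_detected(packet):
        # Reply to the stream's source address: the request travels back
        # through the mapping the incoming video already created.
        sock.sendto(build_nack(packet), sender)
```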
Jim Jachetta (00:48:37):
Ah, okay, okay.
Ronald D. Fellman (00:48:40):
So to answer your question, that’s another reason to have a server in the middle.
Jim Jachetta (00:48:45):
Okay. Okay.
Ronald D. Fellman (00:48:49):
Let’s see. Interoperable… As I mentioned, we designed our ARQ unlike some other sorts of ARQ
systems that are now starting to appear. Our system was designed with the RTP protocol in mind, so that if
you have a receiver that doesn’t have ARQ technology in it, it’s still going to receive the stream. It
looks like a standard video-over-RTP stream. If the receiver doesn’t have the ARQ, it’s just not
going to recover the lost packets, but you’re going to be able to play it on anything, including VLC, which
is free software. So you have that capability.
Ronald D. Fellman (00:49:33):
We have technology built into our box to limit the effect of what I’d call positive feedback, or packet
storms. The one potential problem with ARQ is, if you have a limited amount of bandwidth and you have
some packet loss, and then you start asking for retransmissions, you’re sending additional packets
on this limited-bandwidth connection, which could interfere with other packets that you’re trying to
send on the connection, which could then cause more packet loss, which could cause more
retransmissions, which could cause more packet loss, and you get this sort of positive feedback effect.
We actually have technology in our systems to deal with that, so that those kinds of problems don’t
happen. We can-
Jim Jachetta (00:50:20):
Well, right. If the pipe is collapsing, you could actually make the problem worse, or at a certain
point you’ve just got to let it fail, right?
Ronald D. Fellman (00:50:31):
Exactly, yeah. But what we do is we limit the retransmissions, and so we basically keep the problem from
escalating if it starts to happen.
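One standard way to bound that feedback loop is a token bucket on the retransmission traffic; a minimal sketch (parameters hypothetical, and not necessarily QVidium’s exact mechanism):

```python
import time

class RetransmissionLimiter:
    """Token bucket capping retransmission bandwidth, so recovery bursts
    cannot amplify congestion into a packet storm."""

    def __init__(self, bytes_per_sec: float, burst_bytes: float):
        self.rate = bytes_per_sec
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False  # skip this retransmission and accept the loss
```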
Jim Jachetta (00:50:44):
Very cool, very cool. I think we covered everything on this slide. Let me advance.
Ronald D. Fellman (00:50:52):
Just one other thing on that slide. In the setup, you only really need to open up one port. There are
technologies like RIST that require two ports or more. It depends how you set it up, but with
our technology you only have to open up one port on the firewall on the receiving end if you’re pushing
the stream out from the encoder to the decoder.
Jim Jachetta (00:51:21):
From working in the trenches, and I’m sure you can agree, from our perspective getting those ports
open is sometimes the biggest challenge. I know at some of the major networks the IT departments are
so overworked that they’re like, “Oh yeah, we’ll open that port in six months.” But the event is
Saturday; I need the port open now.
Ronald D. Fellman (00:51:43):
Right, and that’s why we have the other ability, to just connect to a server, and then you don’t have to
worry about it, or the other mode of operation.
Jim Jachetta (00:51:53):
Very cool.
Ronald D. Fellman (00:51:55):
You can go to the next one.
Jim Jachetta (00:51:56):
But even one port under normal circumstances, that’s amazing. I don’t know of any other system that
can work with only one port.
Ronald D. Fellman (00:52:07):
Okay, so that was really what I had to say about ARQ. At this point in the talk, we’ll be looking at our
products, and we’ll be focusing mostly on the newest product, but we have a whole line of products.
We have an H.264 HD encoder and decoder. They’re our workhorses; they’re designed for broadcasters,
they do the closed captioning, they’re designed for 24/7 operation. Instead of one stereo pair like some
of the cheaper encoders or decoders have, they can do two stereo pairs, can encode or decode up to
1080p60, have 3G-SDI I/O, and even have Dolby audio. Also, in addition to ARQ, that unit can
handle RTMP if you want to send to a CDN.
Ronald D. Fellman (00:53:06):
As we’ll discuss later on with the QVCodec, we’ve even expanded that, so in addition to RTMP and HLS,
which I should have mentioned the current box can do, we can also do other ARQs like SRT and Zixi;
we’ll get to that later. We have our media server software, so if you have a box that’s a legacy unit and
you want to add ARQ to it, as I mentioned, you can do that very inexpensively in terms of the hardware
that you would need. Like I said, a Raspberry Pi, or it can even be deployed in the internet cloud.
Ronald D. Fellman (00:53:43):
We actually have two new 4K products. We have our higher-end unit, which is the 4K HEVC codec. It’s
one box that can be set up to do encoding, or decoding, or both. It has the ability, and we’ll get to that in
the next slide… I guess maybe this is a good time to switch to the other slide.
Ronald D. Fellman (00:54:04):
Well, actually, with all of our products, we really designed the products to be operational 24/7. We
have hardware watchdogs, software watchdogs; the hardware is designed so that it won’t stop, it will
just keep running. We try to make them compact. All of our products are one RU high with a half-width
rack, so you can put two in one RU slot. They have a built-in internal power supply and the ability to
plug in an external power supply. So you can either run it just off of DC, you can run it just off the AC,
or you can have a failover capability, where you have it plugged into a second power supply
through DC, and if the AC fails it automatically, instantaneously switches to the DC.
Ronald D. Fellman (00:55:05):
I already mentioned how you can do push or pull mode, or both at the same time. You could
have an encoder that’s pushing to a particular destination, or the software server sending the stream out,
with some of the decoders initiating the feed, or some of the feeds being initiated by the encoder.
Ronald D. Fellman (00:55:27):
We have 608 and 708 closed captions on our products. And in the new product we can do other
things like SMPTE timecodes. The configuration is just through a browser, but the nice thing there is it
doesn’t have to be a browser just on your local network. If you’re behind a firewall, you can set up an
SSH tunnel with our boxes, and you can literally get to any box without having to poke holes through the
firewall, because the boxes can contact out to a server that you set up outside of the firewall, and
using SSH tunneling you can get to the web interface of boxes that are behind firewalls. But we do build
them to be really secure against hackers, because we build these boxes with a built-in whitelist-based
firewall. So you can put the boxes directly on the public internet, and we have a lot of security built into
the boxes to prevent hacking.
Jim Jachetta (00:56:28):
Very cool.
Ronald D. Fellman (00:56:32):
So talking specifically about our newest product, the QVCodec 4K, as I mentioned, it does ultra-high-
definition video, and it can do that with HDR, or high dynamic range, for the color. So you get the beautiful
colors, the larger color space, the larger range between bright and dark. It uses HEVC
encoding. You can also encode in the old H.264, but it does H.265, which is also known as HEVC, which
in many cases requires only about half the bandwidth of H.264. Especially if you’re doing 4K
video, it’s much more efficient.
Ronald D. Fellman (00:57:21):
For HDR, it has the capability of sending the HDR metadata, encoding that at the encoder. One of the
newer techniques that we’ve used in these boxes for security against hackers is to disable logging in with
a login and password. We have the ability in these units so that you just can’t log in; the only way you can
get access to the box is if you have a certificate, like an SSH certificate, installed in the box, and there are
no other logins, no username or password that anyone can try to guess or hack. I mentioned-
Jim Jachetta (00:58:05):
I know with many of our customers, we work with a few studios, Paramount Studios, Viacom,
Nickelodeon, security is a top priority. You know, ever since Sony was hacked in Hollywood, all the
studios are so paranoid about eavesdropping. Their worst fear is production content leaking before a movie
is released or before a TV show is released. They’re moving video around between post-production and
production. If some of that leaked out, it could ruin a whole season of a show. And millions of dollars are
at stake, if not billions, if a movie gets hacked, right?
Ronald D. Fellman (00:58:50):
So this product was designed from the ground up to be really secure. I mean, there is encryption in
there, AES-256, but in terms of anti-hacking, there’s not even the ability to log in as root. Again, we
can set it up that way, but when we ship the units out, not only is root login disabled,
along with logging in with a username or password, but there’s no mechanism in the box for
anyone to load any software that’s not been authorized by QVidium. We have our own servers with
software that’s both authenticated and encrypted. So nobody can take one of our units and try to
fool it by putting bogus software on there, because our software is authenticated with certificates. It’s
also encrypted. The bitstream that goes into the FPGA and the boot loader are both encrypted and
authenticated. So these units are hack-proof, and they’re hack-proof with a battery-backed key.
Ronald D. Fellman (01:00:02):
In other words, if somebody tries to go into the unit and figure out what the key inside the
thing is, as soon as they disconnect the battery and take the board out of the box (there’s a small
lithium cell in there), the key disappears. It’s lost, and they would have no access to it. And there’s
no access even if they try to probe some pins on it to get in and look at the bitstream
or any of the bits floating around for the key. So it’s extremely secure. Built, again, from the ground up,
from the lowest levels to the highest levels, with security, authentication, encryption. And
we use our standard RPM packages; we have a server. So as I mentioned, you can’t even put bogus or
hacker software on there.
Ronald D. Fellman (01:01:02):
There’s a button that makes it really easy to upgrade this box. If you want to upgrade the box, there’s
one button that says upgrade. You do it on your time; it’s not going to interrupt anything that’s going on
in the box. So if you press that one button, it goes to our server, and if anything needs upgrading it does it
automatically. There’s nothing to download, and there’s nothing you can download except what’s
authorized by us. We have modular I/O daughter cards. The basic unit comes with a 12G-SDI port on it
and an SFP, so you can put a second port, say optical, on there. You could also put a second Ethernet
port, and that could also be optical. And you have the ability to go up to 10-gigabit Ethernet.
Ronald D. Fellman (01:01:51):
In addition to the regular Ethernet port, there’s a USB port now for a console, if we need to get in there
for some reason, if you want us to help with something at a low level. There’s also another USB 3 port
for adding additional disk drives or disk space, if you want to record video and have it stored on the
unit, or play back video that was stored on the unit. So it has these recording and playback features, in
addition to being able to do multiple I/Os. We have a quad I/O card, so that you can use the unit literally
as four completely independent encoders and decoders. So, for example, this card has four 3G-SDI ports,
and you can have, say, one or two of them be encoders
and the others be decoders. You can get any mix, any combination you want, of encoders and decoders.
Ronald D. Fellman (01:02:53):
So this is, I think, a fairly neat feature: if you want to take this to a stadium, you can have three
cameras connected to it, where it’s doing three encodes of the three cameras on the field, and then have
a return feed from the studio on the fourth port, and it’s very low latency. I didn’t mention it here, but
the latency is around a tenth of a second. So having the low latency and the bi-directional operation, it’s
like having a very high-quality video conferencing system. [crosstalk 01:03:26]
Jim Jachetta (01:03:26):
Yeah. The BNCs are your inputs/outputs. You define whether it’s an in or an out, a decode or an encode, via
the software? It’s ambidextrous.
Ronald D. Fellman (01:03:35):
Exactly.
Jim Jachetta (01:03:36):
That’s awesome.
Ronald D. Fellman (01:03:38):
Absolutely. Yeah. And we do that with all the ports. Even the 12G-SDI port and the SFPs, they’re all
designed to be either inputs or outputs. In fact, we have one customer who wants to use it
for two 4K 30 feeds, where I think in that case he wants them both to be inputs, but you can do
input and output at the same time. Or use one of the SFPs, let’s say, for output, or you can actually have
an SFP with two miniature SDI ports on there and have it be bi-directional.
Ronald D. Fellman (01:04:17):
Now, I mentioned our QVidium patented ARQ. I mean, if you ask me, I think it’s the best ARQ out
there, but now there are a lot of other people doing ARQ.
Jim Jachetta (01:04:26):
Are you biased, Ron?
Ronald D. Fellman (01:04:28):
I might be, I might be, but we do have the patent, so the patent [inaudible 01:04:31]
Jim Jachetta (01:04:30):
Yeah, if you invented it, I think you know what you’re doing.
Ronald D. Fellman (01:04:36):
And our patent protects it in terms of being able to do things that no one else is going to be able to do
without infringing on the patent.
Jim Jachetta (01:04:46):
Correct. Other ARQs, unless they’re licensing something from you, they have to go around-
Ronald D. Fellman (01:04:51):
Exactly, exactly. You know, but we do offer RIST, and we’re members of the RIST committee. We
have both main profile and simple profile RIST. I think we were the first ones to actually have a main
profile implementation of RIST and demonstrate it, with multiple interop demos with the RIST group. We
have SRT on there. Not only do we have Zixi on there, we actually, by the way, own founder’s stock in
Zixi, and so the founding of Zixi had a little bit to do with QVidium and the fact that we helped get it
going. And, like I said, we have founder’s stock. Also, our units are ZEN Master certified and Zixi certified
for interoperability with their server with our encoder. We can also do MPEG-DASH, RTMP, and HLS.
So you can hook these things up and send streams to or from CDNs with that. I want to say RTMP
[crosstalk 01:05:55]
Jim Jachetta (01:05:57):
You bring up a good point that comes up a lot, Ron. When you say this QVARQ, or QVidium ARQ, is
patented, that might scare certain people off. It’s like, “Well, I don’t want to be boxed in and then not
have any interoperability.” But you consider interoperability a very important part of your offering. You’re
doing interops at the Video Services Forum and at NAB, right? I guess the point is, what do you do? You
turn off the ARQ and use the open RIST instead?
Ronald D. Fellman (01:06:37):
Yeah. There’s a menu item that lets you select for the IP transport. And you can select between UDP,
which is just the simplest thing, or RIST, SRT, Zixi, ARQ, RTMP, HLS. I mean, you just have a laundry list of
all these things that you could select. And by the way, the RTMP, we also have RTMPS so you could work
with Facebook and have it secure and encrypted. So we’re fully interoperable. We want to be
interoperable with everybody else’s gear. If you want to do HEVC HD video, or even just standard high
def video, or anything you want, these boxes are fully interoperable.
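[Editor's note: QVidium's ARQ, RIST, and SRT are all, at heart, NACK-based retransmission schemes: the receiver buffers packets briefly, detects sequence-number gaps, and asks the sender to resend what's missing. The sketch below is a generic illustration of that pattern, not QVidium's patented algorithm; the packet and feedback interfaces are invented for the example.]

```python
# Minimal sketch of NACK-based packet recovery, in the spirit of ARQ/RIST.
import time

class ArqReceiver:
    def __init__(self, send_nack, buffer_ms=200):
        self.send_nack = send_nack   # callback: ask sender to resend a seq
        self.buffer_ms = buffer_ms   # delay that gives retransmits time to land
        self.expected = 0            # next sequence number we expect
        self.buffer = {}             # seq -> (arrival_time, payload)

    def on_packet(self, seq, payload):
        self.buffer[seq] = (time.monotonic(), payload)
        # Any gap between what we expected and what arrived is presumed lost.
        for missing in range(self.expected, seq):
            if missing not in self.buffer:
                self.send_nack(missing)   # feedback path: request a resend
        self.expected = max(self.expected, seq + 1)

    def pop_ready(self):
        """Release packets whose buffering delay has elapsed, in order."""
        now = time.monotonic()
        out = []
        for seq in sorted(self.buffer):   # sorted() copies, so deletion is safe
            arrived, payload = self.buffer[seq]
            if (now - arrived) * 1000 >= self.buffer_ms:
                out.append(payload)
                del self.buffer[seq]
            else:
                break
        return out
```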
Jim Jachetta (01:07:27):
Very good. Very good. Yeah. I think, is there one more slide, or no, there’s a few more.
Ronald D. Fellman (01:07:35):
So, that’s what the box looks like.
Jim Jachetta (01:07:37):
So yeah a picture’s a thousand words. You can see these units are really well-built, really great
manufacturing, very solid, very robust. Maybe tell us what we see here, Ron.
Ronald D. Fellman (01:07:48):
So on the front, you see the fan, of course, and the power light. There's not much to talk about in the top photo, which is the front. But the bottom photo really shows you everything. We got this tested for CE and FCC certification, so all those little symbols that you see there on the side mean it's fully compliant with the latest safety standards if you're going to send these things out to Europe. We even have a special sticker that's required if you're going to send it to Norway or the other Scandinavian countries. So it's fully compliant with both the safety and EMC standards. And you can see the power jack is on the left. Going from left to right, there's a micro SD slot where you can store video or play back video. We have the two SFP connectors. One is designed for a second Ethernet port, which could be optical. It could even go up to 10 gigabit, although right now the standard configuration is for one gigabit if you plug an SFP in there. We have the two [12G-SDI 01:09:02]…
Ronald D. Fellman (01:09:02):
We have the two 12G-SDI ports on here. One goes through the SFP and the other is a coax. They're both bi-directional. We have a small genlock input; we don't have that feature implemented just yet, but the hardware is all there. If you need hardware genlock, we're planning on having that down the road if there's demand for it. I haven't had a lot of demand yet, but it's there.
Ronald D. Fellman (01:09:32):
There’s display port for output. Another USB port for storing video or playing back video. As an
alternative to the microwave, HTN is an alternative to a lot of flash memory inside the unit. That small
console cord, that’s a great for hooking up, it just gives you access into the low level Linux console.
Again, we’ve blocked out route. So if you want to use that it’s even secure in the sense of somebody
getting access to the box and trying to plug into the box. There’s a standard ethernet port with some
indicator lights, and then we have the Daughter-Card that sits on top of it.
Ronald D. Fellman (01:10:17):
The Daughter-Card we have shown here is the quad I/O card, which is probably going to be the most popular one. It has the four ports we talked about, where each one of those 3G-SDI ports can be either input or output. If you happen to have a camera or a system that needs two-sample interleave as a way of inputting 4K video, we can do that. We have some other Daughter-Cards, one for input and one for output, for two-sample interleave. So you can do 4K with quad 3G ports.
Jim Jachetta (01:10:54):
Well, it’s to me, this is when you have a smaller box like this, customers will be like, “Oh, well I really
want a redundant power,” and you have that. A power supply number one is AC and then you could
have an external, you could feed a DC directly or have an AC to DC converter box to give it a dual
redundant power to make this a real enterprise and robust and reliable solution. Correct?
Ronald D. Fellman (01:11:30):
Right. I do want to point out, as it says in the upper right of the lower photo, this box is made in the U.S., which has implications if you're going to be selling to government agencies, for example; there are requirements for that. It also helps in terms of the supply chain, in being able to get these things out to you. If you place an order, we try to always keep a good number of units in stock, so we've minimized supply chain problems as well.
Jim Jachetta (01:12:03):
It’s very good. Very good. Now, Vidovation’s colors, I don’t know if you realize we’re on blue and
different shades of blue, dark blue. I like the color blue. It’s nice.
Ronald D. Fellman (01:12:14):
Yeah, well you could [crosstalk 00:03:16]. So there are some specs on the unit. Glass to glass, 120 milliseconds. We were at a recent conference, the VidTrans conference, showing that off. I think we had the lowest-latency encoding/decoding demo there, and at the same time the unit was sending a RIST stream over to the interop. To help get the really low latency, and we can actually go lower than that, we just haven't implemented some of the features because [inaudible 01:12:55] in media, we're able to do what's called gradual decoder refresh: instead of encoding and sending a whole I-frame in one big chunk, it does it in stripes. You can select either horizontal or vertical stripes, and that gets the latency way down on the encoding. So when I specify 120, most of that latency is actually on the decoder; the encoder latency is just a fraction of that.
Jim Jachetta (01:13:22):
Well, you often see in transmission a big spike when the I-frame goes, and then you see other smaller spikes. What do you send? I-frames, B-frames, P-frames? What does your [crosstalk 01:13:36] look like?
Ronald D. Fellman (01:13:38):
So we’re doing the same thing. We have I-frames, B-frames, and P-frames or I, P, and B. But the I-frame
is broken up into these little stripes. So you don’t get this big burst of packets when you’re doing a new
I-frame because it’s done in these stripes. The bursting ness is much more striped.
Jim Jachetta (01:13:56):
You spread it out a little bit, so if your connection is a little tight, if you're pushing it to the maximum, you don't get a glitch.
Ronald D. Fellman (01:14:04):
Right.
Jim Jachetta (01:14:05):
You don’t want the ARQ to kick in unnecessarily. So that helps smooth out the transmission?
Ronald D. Fellman (01:14:10):
Yeah, absolutely. So it does two things: it smooths out the transmission, as you said, and then it also
gives you much lower latency.
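[Editor's note: A quick back-of-envelope calculation shows why striping the intra refresh smooths the bitrate. All numbers below are assumptions chosen for illustration, not measured figures from the QVidium encoder.]

```python
# Back-of-envelope illustration of why gradual decoder refresh (GDR) smooths
# the packet bursts. All figures are assumptions for the example.
fps = 30
iframe_bits = 2_000_000   # assume a full intra frame costs ~2 Mb
pframe_bits = 200_000     # assume a typical inter frame costs ~0.2 Mb
gop = 30                  # one full intra refresh per second

# Conventional GOP: the whole I-frame lands in one frame interval.
peak_conventional = iframe_bits

# GDR: the intra data is spread as stripes across the whole GOP, so each
# frame carries a P-frame plus 1/gop of the intra refresh.
peak_gdr = pframe_bits + iframe_bits / gop

print(f"peak burst, conventional I-frame: {peak_conventional/1e6:.2f} Mb/frame")
print(f"peak burst, GDR stripes:          {peak_gdr/1e6:.2f} Mb/frame")
# With these assumptions the per-frame peak drops from 2.0 Mb to ~0.27 Mb,
# so the network never sees the big I-frame spike Jim described.
```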
Jim Jachetta (01:14:18):
Interesting [crosstalk 00:05:21].
Ronald D. Fellman (01:14:20):
So, as I mentioned, we can do both H.264 and HEVC. The box does such a good job with HEVC that there's really almost no reason to use H.264; it's there more for backward compatibility, because HEVC takes about half the bandwidth.
Jim Jachetta (01:14:35):
Right.
Ronald D. Fellman (01:14:37):
The same box can be set up to do 4:2:2 or 4:2:0, and it can be set up for 8-bit or 10-bit. I guess I didn't put that on the slide, but it does do either. For HDR, you really need 10-bit video encoding, and that's what this box can do. If you want to be backward compatible with older equipment, you can set it for 4:2:0 8-bit encoding instead. But if you really want good contribution quality, you can set it for 4:2:2 10-bit and throw in the HDR metadata. As for color spaces, it can go up to Rec. 2020, which is for HDR. And it can do HDR not just on 4K; HDR makes a big improvement even with regular HD video.
Jim Jachetta (01:15:27):
Oh, I didn’t know. You only hear of HDR in reference to 4K. So you can add HDR metadata to HD? I
wasn’t aware of that.
Ronald D. Fellman (01:15:38):
Yes. Yeah, you can. Absolutely.
Jim Jachetta (01:15:41):
Okay. Then if you have a television that supports HDR, and it drops down to HD, the metadata will still be in there?
Ronald D. Fellman (01:15:50):
Yeah. So what happens is, if you send an HDR stream to a non-HDR-compliant receiver, you still get the video; without the added metadata, the colors just look a little washed out. You can also add the metadata by hand on some equipment. For example, we have an HAA converter that handles HDR, but since it didn't handle the metadata, there's a user interface on it so you can add in the metadata by hand. Most HDR-compliant streams nowadays just use a static template of metadata for the whole program. But there is also the ability to do dynamic metadata, where it's different for each frame.
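[Editor's note: The "static template of metadata" Ron mentions is, in HDR10 terms, a small fixed set of values: the SMPTE ST 2086 mastering-display color volume plus MaxCLL/MaxFALL content light levels. The sketch below shows typical placeholder values for illustration; they are not taken from the webinar.]

```python
# Sketch of static HDR10 metadata: SMPTE ST 2086 mastering-display values
# plus content light levels. The numbers are typical placeholders.
static_hdr_metadata = {
    "color_primaries": "bt2020",              # Rec. 2020 wide gamut
    "transfer_characteristics": "smpte2084",  # PQ, the usual HDR10 transfer
    "mastering_display": {
        # Display primaries and white point, in CIE 1931 x,y coordinates
        "red":   (0.708, 0.292),
        "green": (0.170, 0.797),
        "blue":  (0.131, 0.046),
        "white": (0.3127, 0.3290),
        "min_luminance_nits": 0.005,
        "max_luminance_nits": 1000,
    },
    "max_cll": 1000,   # brightest pixel in the program, in nits
    "max_fall": 400,   # brightest average frame, in nits
}
# A receiver that understands this template tone-maps correctly; one that
# doesn't just shows the slightly washed-out picture Ron described. Dynamic
# metadata carries the same kind of values frame by frame.
```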
Jim Jachetta (01:16:40):
Gotcha. Gotcha. All right. Are you ready for the next slide?
Ronald D. Fellman (01:16:44):
Yeah, I am. So I think I talked about most of this. The box can do file recording or playback. I mentioned the ports on there before, the quad 3G ports. It has a DisplayPort if you want to monitor the output. We've talked before about using it as a quad encoder/decoder, where each one is independent. The audio I didn't mention before: this box, unlike earlier-generation boxes, has eight channels of embedded audio, in other words, four stereo pairs. If you use the quad encoder/decoder capability, you're talking about 32 channels of audio total in this one box. Talked about the [crosstalk 00:08:35].
Jim Jachetta (01:17:36):
I’ve learned, I’m sure you’ve learned this too, even though we’re in the video business, you can never
have enough audio. The audio, four to one, eight to one. So, eight channels of audio for each video,
that’s impressive. That’s good. People will, people will use it.
Ronald D. Fellman (01:17:57):
Another feature that we spent a lot of time on is processing the ancillary data in the SDI. So it can handle closed captions, which is something people aren't doing much with 4K yet, but it has that ability in there. We also have the ability to process SMPTE timecodes, which we think could be really useful if you want to do synchronization amongst multiple encoders and decoders, or for putting in ad insertion markers.
Jim Jachetta (01:18:31):
Yeah. For distribution, that can be important. There’s a SMPTE standard around that. I don’t remember
the SMPTE number, but that came up with a customer not too long ago, needing the ad insertion
markers or the metadata for that.
Ronald D. Fellman (01:18:49):
So just to be really clear on that, some of that is not implemented yet. The capability is there; if we had a customer request for it, it's something we would put time into, and it wouldn't take too long to get done.
Jim Jachetta (01:19:04):
Okay. Okay. All right. Let me see.
Ronald D. Fellman (01:19:11):
I mentioned [crosstalk 00:10:11].
Jim Jachetta (01:19:12):
Security before.
Ronald D. Fellman (01:19:14):
We touched on security before; here it is in bullet points. It's secure against malware: it basically locks out any foreign software from being downloaded into the box. Root access is disabled. Login requires a username and password. It has the ability to encrypt the streams. There's a whitelist-based firewall built into the box, and SSL support if you're doing web access, like HTTPS, which encrypts the web access.
Jim Jachetta (01:19:48):
Very cool. Very cool.
Ronald D. Fellman (01:19:53):
So I talked about some of this, the one-touch update. There's one button you press; if the unit needs to be updated, it does it on your schedule, and it does whatever it needs to do automatically. That's the only way of getting updates in there, so you can't get hacker software in there. It's done using standard RPM, a technology that has been in Linux systems for a long time now. I like the fact that it's secure.
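[Editor's note: The security property Ron describes comes from RPM's package signing: an update installs only if its GPG signature verifies against a key already trusted on the box. Below is a generic sketch of that check using the standard rpm tool; it is not QVidium's actual updater, and the package file name is made up.]

```python
# Generic illustration of signature-checked RPM updates.
import subprocess

def verify_and_install(package="codec-update.rpm"):
    # 'rpm -K' (--checksig) verifies the package digests and GPG signature
    # against keys already imported on the box.
    check = subprocess.run(["rpm", "-K", package],
                           capture_output=True, text=True)
    if check.returncode != 0 or "OK" not in check.stdout:
        raise RuntimeError(f"signature check failed: {check.stdout.strip()}")
    # Only a package signed with a trusted key makes it this far.
    subprocess.run(["rpm", "-U", package], check=True)

# verify_and_install()  # run on the box, on the operator's schedule
```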
Ronald D. Fellman (01:20:25):
It keeps track of the versioning of different features so that you're not getting the wrong update. Then there's the ability to tunnel through firewalls. This is equivalent to what [Zixi 01:20:42] talks about with their ZEN Master certification, which we have. But we also have our own mechanism in addition, which allows you to get access to the box by tunneling through the firewall where the box is, using your own SSH server that you set up. I talked about, again, putting in the public/private keys and the encrypted communications, SSH access. QVidium does have a network management system, so if you buy a lot of these boxes, you can set up a server that monitors multiple boxes all at one glance and see which units are up or down.
Jim Jachetta (01:21:29):
So let me ask you, Ron. When you say you tunnel through the firewall, what are you doing? Are you coming in and out of port 80 or something? You're using one of the ports that's already open to communicate?
Ronald D. Fellman (01:21:41):
It’s actually doing SSH. Okay. So, the communication isn’t even done through port 80 anymore, it’s going
through the SSH board, which you could configure to be… SSH is normally port 22, but you can set it to
be something else to be a little more secure. You could set is say 2022, and it could contact an SSH
tunneling server.
Ronald D. Fellman (01:22:07):
Basically, you set up the box with the IP address or the DNS name of the server you're going to use. The QVidium codec box is sitting behind the firewall at some location, and it sends packets out upstream through the internet, over to the server. Now, most firewalls will allow packets out; they just block packets from coming in. There are some firewalls that block both directions, and obviously you're going to have to work with the IT department if that's the case. I've seen that in some hospitals.
Ronald D. Fellman (01:22:43):
But most places, if they allow outbound packets, you plop the box down and you don't even have to configure an IP address on it. You can pre-configure the box for DHCP, and you can pre-configure it with the IP address of the [inaudible 01:23:07] server. So you can take a box that has been pre-configured for DHCP with the tunneling turned on, send it out to a client, and they can plug it in and they don't have to do anything. You just-
Jim Jachetta (01:23:22):
Then you go into the cloud, and you'll find it automatically through the SSH server?
Ronald D. Fellman (01:23:29):
Right. Right. It automatically sets up its IP configuration with DHCP, and it automatically contacts the server. You know what port you've assigned to it on the server. It's encrypted; you can go into it and no one else can. So you have access to that unit, even though it's sitting behind a firewall, and you never have to do anything.
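[Editor's note: What Ron describes matches the standard reverse-SSH-tunnel pattern: the box makes an outbound connection to a tunnel server, and the server forwards an assigned port back to the box. The sketch below uses the stock OpenSSH client; the hostnames, ports, and key path are assumptions, and QVidium's own mechanism may differ in detail.]

```python
# Reverse SSH tunnel from a box behind a firewall, via the OpenSSH client.
import subprocess

TUNNEL_SERVER = "tunnel.example.com"   # your SSH server on the public side
SERVER_SSH_PORT = "2022"               # non-default port, as Ron suggests
ASSIGNED_PORT = "10022"                # server port mapped back to this box

# Outbound connection from behind the firewall; -R asks the server to
# forward its ASSIGNED_PORT back to this box's local SSH port (22).
# This call blocks for the life of the tunnel.
subprocess.run([
    "ssh", "-N",
    "-p", SERVER_SSH_PORT,
    "-i", "/etc/box/tunnel_key",            # public/private key pair
    "-R", f"{ASSIGNED_PORT}:localhost:22",
    f"tunnel@{TUNNEL_SERVER}",
])
# An operator then reaches the box with: ssh -p 10022 admin@tunnel.example.com
# Only outbound packets cross the box's firewall, which is why no inbound
# rules or port forwards are needed at the remote site.
```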
Jim Jachetta (01:23:56):
That’s very cool. That’s very cool.
Ronald D. Fellman (01:24:02):
Oh boy.
Jim Jachetta (01:24:02):
It’s miscellaneous.
Ronald D. Fellman (01:24:04):
So I talked about all of these connectors on the box, I think. Instead of the old-fashioned RS-232, there's the console port, which is useful if you have a troubleshooting session. The SFPs give you a lot more flexibility. You can do redundancy with the Ethernet because of the SFPs on there and the Gigabit port. [crosstalk 00:15:33].
Jim Jachetta (01:24:33):
What are you doing with redundancy and bonding? What exactly does that mean? What are you
implemented there?
Ronald D. Fellman (01:24:40):
Well, there is a capability. I mean, if you had, let's say, two different Ethernet links going out two different modems, and you have limited bandwidth on each modem, you could bond the two links together. That's what bonding is about. We haven't had too many people needing that, but the box is certainly capable of it.
Ronald D. Fellman (01:25:06):
It would be [inaudible 01:25:08] software we'd have to add to the box, but bonding is something we've done in the past. For example, we have the technology in our older box. We just haven't had many people needing to do that, not with higher-speed links.
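[Editor's note: Bonding, as Ron describes it, means splitting one stream's packets across two uplinks so the stream can use their combined bandwidth. Here is a minimal round-robin sketch; the interface and destination addresses are placeholders, and a real implementation also needs sequence numbering and reordering at the far end.]

```python
# Minimal sketch of link bonding: alternate one stream's packets across two
# uplinks (e.g., two modems). Addresses are placeholders for the example.
import itertools
import socket

DEST = ("203.0.113.10", 5000)   # receiver that merges the two legs

def make_sock(local_ip):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind((local_ip, 0))       # pin the socket to one uplink's address
    return s

links = [make_sock("192.168.1.2"),   # modem A
         make_sock("192.168.2.2")]   # modem B
rr = itertools.cycle(links)

def send(packet: bytes):
    # Round-robin: alternate packets between the two links, so the stream
    # can use the sum of both modems' bandwidth.
    next(rr).sendto(packet, DEST)
```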
Jim Jachetta (01:25:25):
Okay. Okay. [crosstalk 01:25:27] What about the Daughter-Card you mentioned?
Ronald D. Fellman (01:25:32):
Yeah. In the future, we're going to consider, if somebody needs it, for example, a QAM modulator built into the box; we've had a few people thinking that might be a useful feature. Because the thing is built around an FPGA, there's a lot of flexibility. For example, right now it's doing HEVC and H.264 encoding, but because it's built around an FPGA, we could actually add something like SMPTE 2110, which is for lightly compressed or raw video over Ethernet, and that rawness [inaudible 01:26:10].
Jim Jachetta (01:26:14):
So that would be something you’d implement with the Daughter board?
Ronald D. Fellman (01:26:19):
Well, actually it’s probably the wrong slide to talk about it. We could do it with, or without the Daughter
board. It would be something you can actually put into the FPGA.
Jim Jachetta (01:26:27):
Okay. Okay. The QAM modulator, you'd put that on the encoder to drive a cable TV system?
Ronald D. Fellman (01:26:33):
Yeah. That would be something you put on a Daughter-Card, so that you'd have an RF interface on the Daughter-Card, and when you encode the video, it goes directly out as RF. So, in conclusion: right now we have stock, and our business model is to always have stock of product. If you buy the box, tech support is included in the purchase. It comes with a one-year warranty. All the software and firmware updates are free. There is a rack-mount kit available; as I mentioned, you can put two units in a 1-RU slot or one unit in a 1-RU slot. We have those rack-mount options available. We're a small company, and we're always open to feedback and suggestions.
Jim Jachetta (01:27:29):
Well, yes, it's the same at VidOvation. We love talking to our customers; some of our greatest inventions come from our customers' ideas. So your invention of ARQ came from telemedicine, and then Arnold Schwarzenegger demanded that you make it work: "You make it work now. Now it will work."
Jim Jachetta (01:27:53):
So thank you so much, Ron. I hope our technical glitch with the questions didn't impede anyone's questions. I'm sorry. I actually meant to put a contact slide for VidOvation in here and forgot to do that. If you have any questions, you can call VidOvation at (949) 777-5435, or you can email sales@vidovation.com. That's V-I-D-O-V-A-T-I-O-N. People have a tendency to want to put an extra "e" in "vid," making it "videovation." It's VidOvation. We'd love to hear your questions, your feedback, your comments. You can text me at (516) 551-3201 if you have any questions. Looks like that's it, Ron. That was a good 90 minutes of knowledge you laid on us there.
Jim Jachetta (01:28:53):
I think I’m going to need some brain food for lunch today to get those neurons going and keep, absorb
all that knowledge. Thank you so much. If anyone’s interested in this technology, please reach out to
Vidovation. Visit our website at vidovation.com. V-I-D-O-V-A-T-I-O-N.com. Or give us a call at (949) 777-
5435. Thank you, everyone. Be safe out there. Be healthy. We hope to see you soon at some point.
Hopefully, we’ll have a trade show at some point in the future, Ron. But until then we’ll work virtually.
Thank you, Ron. Thank you for participating today. We appreciate your expertise and thank you for
sharing.
Ronald D. Fellman (01:29:43):
Well, thank you. It’s a pleasure and thank you for having me on.
Jim Jachetta (01:29:47):
Thanks Ron. Have a good rest of your day. Take care.
Ronald D. Fellman (01:29:49):
Thank you. You too. Take care.