1035 N Armando Street, Suite V, Anaheim, CA 92806  info@vidovation.com 1-949-777-5435

Best of Live Production at NAB from VISLINK Mobile Viewpoint, Part 2 [Recording Download]

Published on May 23, 2022 | News, Podcast, Featured, Webinars


Leveraging AI-driven production tools to boost viewer engagement for sub-tier-1 sports programming

  • Remote directing technology for scripted and produced productions

  • AI camera systems that eliminate the need for camera operators

  • Creating new content revenue streams through AI-driven live production

  • Fully-automated and multi-camera sports production

Register to Download the Presentation & Watch the Recording

Please download and watch all three parts of our Best of Live Production at NAB series:

  1. Best of Live Production at NAB from Haivision AVIWEST, Part 1 [Recording Download]

  2. Best of Live Production at NAB from VISLINK Mobile Viewpoint, Part 2 [Recording Download]

  3. Best of Live Production at NAB from MultiDyne, Part 3 [Recording Download]

Transcript

Best of Live Production at NAB from Vislink Mobile Viewpoint

Jim Jachetta:

Good morning everyone. I’m Jim Jachetta, CTO and co-founder of VidOvation. And I’m very pleased to have David Edwards again from Vislink Mobile Viewpoint as a guest. Welcome David, how are you?

David Edwards:

I’m great. Thank you Jim. It’s good to be talking to you again and good to be on these webinars, talking about all of the exciting things that happened at the NAB Show this year. Now that NAB is back. Not all the way back, though.

Jim Jachetta:

Yes. Yes. Yeah. I think we were mentioning before we started, some people obviously couldn’t make it. Their company wasn’t comfortable sending them or they weren’t comfortable coming due to health concerns. But I think the majority of people were just busy. There was a year or two backlog of work: someone’s busy building a studio, someone just started a new job, someone’s in the middle of a lengthy production. So these are good things, right?

David Edwards:

Absolutely.

Jim Jachetta:

That we all need to work. And there’s also a shortage of workers now. That’s a big struggle as things ramp up. And you’re going to share with us today some technology that eliminates the need for camera operators, et cetera.

David Edwards:

That’s right. And it’s a trend that’s been happening in the broadcast industry for quite a few years now: finding new ways to do more without significantly increasing costs. That potentially opens up new revenue paths for people to make use of that content, and new business for the people providing those content streams. And so, that’s where Vislink and Mobile Viewpoint are looking to enable people to achieve more without necessarily costing the earth.

Jim Jachetta:

Right. Right. We’ve talked about this before too, that maybe for your tier 1 production you still do the more traditional workflow: camera operators, production trucks on site. But for tier 2, 3, 4, high school sports or any kind of lower-tier live event, in the past there wasn’t a budget for it. They couldn’t afford a truck, couldn’t afford all those operators. So I think there’s such a demand. Every studio now has a streaming platform. Now the scramble is to create content for those platforms, and in particular live content.

David Edwards:

That’s right. And people will consume content if it’s there and if it’s compelling. And I think really the task is, we know that people have broadcast rights to lots and lots of content. Maybe there have been challenges in the past in making that content affordable to produce. And then the challenge, within the budget you’ve got, is how do you make that content compelling? And if you can achieve those things, then potentially there is a much larger revenue stream there for the taking. And so, today perhaps we’ll talk about some of those tools that we can deliver to people to make compelling content when they’re thinking beyond the tier 1, tentpole budget, which of course everyone is very familiar with and will remain with us because that is truly compelling, appointment-to-view television. But things are done differently in the different categories.

Jim Jachetta:

Very good. Very good. So shall I start the video here?

David Edwards:

Yes. [inaudible]. So before we just click onto that next one, we’re going to look at a number of Mobile Viewpoint products today. And the first one that we’ll talk about, because we’ll show you a video that will start as soon as Jim presses the button, is a way to get content remotely, maybe from a bureau, maybe from interviews, and achieve all of that from a central location, manage it from a central location. So Jim, next slide. Please run VT.

David Edwards:

Okay. So that was an introduction to a product called Trolley Live. And as you saw there, it has the ability to do many things. It’s been great in the COVID world, where people have deployed those units into locations and done two-way interviews without, obviously, having to have many people in the room. And since COVID, as we’ve moved out of that, it’s been finding lots of interesting applications in, for example, remote bureaus: two-way interviews with people, clearly a step up from Zoom interviews. But where it has been really good is in the broadcast scenario, where you need to manage all of that connectivity from the central location with someone who is skilled in the art of broadcasting, and not put any setup problems onto the onscreen talent. And so, any general person can use this equipment; a journalist has the ability to do their piece to camera and transmit their video from wherever they are over a bonded cellular connection, over wired ethernet, over wifi connectivity, so you really can use it anywhere.

David Edwards:

What we’re showing here in this picture is that at the system’s heart, the operating system or the control system that surrounds this is a product called LinkMatrix. And this enables you to operate that equipment remotely, to decide when to bring that equipment live, schedule it if you want to, extract clips, record clips, add graphic overlays and publish it either onto a live channel or onto a social media channel, or send it off to storage to be used at a later date. So as a two-way interview device, it’s a really unique and novel application of a portable way to collect high quality interview content, maybe from a remote bureau for example. So what we’ll do now Jim, if we can skip onto the next slide, is you can see how this was used in action at the Beijing Winter Olympics in the Athletes’ Village. People could do a piece to camera there.

David Edwards:

Okay. So that short clip was from the Polish team at the Winter Olympics. And within the Olympics, the Athletes’ Village was tightly in COVID lockdown. Only certain people could go in and out. Broadcast teams couldn’t go in and out. And so, this provided a really neat method for the athletes there to do their interviews to the home audience and discuss how they’d done in their sport. So it’s a good example of how you can use these devices as a pop-up interview location with easy connectivity out of the event. It made a fantastic impression on a lot of the Vislink and Mobile Viewpoint customers at NAB. So this is a 20-minute super session. [inaudible].

Jim Jachetta:

Yes. Yes.

David Edwards:

So we’ll move onto the next slide if we can Jim. And talk about some new technology using artificial intelligence to do new things. And here we’re going to talk about artificial intelligence to do studio production and to automatically select the camera shots and to produce that event without any major human interactions. So if we can skip to the next slide and play the next one.

Speaker 4:

Imagine a broadcast studio that does not require a production crew to create a professional production through the power of artificial intelligence. This is now a reality. Meet vPilot by Mobile Viewpoint. With vPilot you can create a professional production that is easy, fast and affordable. Engaging with your audience through broadcasting has never been so effortless. vPilot offers a fully automated multi-camera studio with artificial intelligence controlling the cameras and the shot selection. You don’t need any behind the scenes crew. With the studio management tool, you can easily set up preferences for the virtual director, such as shot types and lengths. The vPanel gives the talent flexibility at their fingertips, allowing them to add graphics, inserts or cut to an external camera when they see fit. Live streaming and online publication is fast and easy. You can stream to any video platform you want. So start broadcasting with the power of AI with vPilot by Mobile Viewpoint.

David Edwards:

That’s really great. And so, that artificial intelligence or AI-driven automatic studio production system enables you potentially to do new things. We talked at the beginning about new content. Could you, for example, use this in remote bureaus again? For new studio applications? In that video it was set up at a soccer stadium and people were able to do discussions at half time of the match or the end of the match. And that’s really a way of getting new content out to viewers without it costing a significant amount. You don’t need a production team on site to run this. And so, what the AI there does, is the AI engine decides which shots to bring to the viewer. Does it choose a wide shot, does it do a close up of one presenter? And the way that it determines that is that there are 3D sensors on top of the camera and that detects the people where they are in the room.

David Edwards:

If a presenter needs to or is inclined to move about as they talk, then the cameras will track them. If one particular person starts talking, the AI engine may decide to switch to a close up of that person. If people are having a two-way conversation, it may decide to give you a wide shot. And so, all of that enables you to produce and direct the event in a similar way to how a human camera team and director would. Let’s look at an example of some of the content from that, so if we can skip to the next slide Jim, that would be great.

David Edwards:

So this is basically a soccer discussion between two pundits here.

Speaker 5:

[foreign language].

David Edwards:

This gentleman is talking, reviewing that match. The system selected a close up on that particular person because he’s doing a lot of the talking. And then the AI system, the AI engine, makes a different choice.

Speaker 6:

[foreign language].

David Edwards:

And it’s decided to call a wide shot there. And here you can see how we inserted a brief clip into that. And I think actually the video’s gone round again. But before we skip to the next slide Jim, that system, you see, has a certain pace to it. There’s a sort of relaxed feel in terms of the production. You can configure the system with a profile to have a similar relaxed feel, or if you’re using the system maybe for visual radio or a visual podcast, you can give the system much more pace. So it’s cut, cut, cut as people talk back and forth. And it’s up to you to tune the system for the particular type of production that you want to make.

Jim Jachetta:

We can go a little longer than 20 minutes David. So don’t feel like you’ve got to rush. My colleague Rick and I had the pleasure of spending more than two hours visiting the Mobile Viewpoint booth before the show started. And it’s quite impressive. David doesn’t have a screen of it now, but we can do a demo if any customer wants a behind-the-scenes look. There’s an admin screen for the AI, and it follows the speaker’s nose, right? The sensor, is it infrared? It finds the facial features from the eyes and the mouth?

David Edwards:

That’s absolutely correct. So for example, if you use that studio with different presenters, and maybe someone is very tall and someone is short, it will find the person within that 3D space and adjust the cameras accordingly. And as I said, if someone needs to or is inclined to move about as they talk, maybe pace, walk up and down within the studio, the cameras can track them. In terms of using the AI engine, if someone turns their head and gives an ear shot to the camera, that’s not a particularly interesting shot. And so, the AI engine will detect that it’s seeing a different view of a person and may choose a different shot, maybe a wide shot, so that you’re not looking down [inaudible]. So it’s a really intelligent system in terms of choosing which of the camera views to deliver.

Jim Jachetta:

The AI engine, like David said… I was pretending to be the anchor and I was turning my head, but despite me turning my head, the little dot showed the AI knew where my nose was at all times. So it could find me in the frame. So it’s not using video recognition. It’s using three dimensional sensors to find the speakers. That’s very cool.

David Edwards:

It finds the speakers, but it’s a whole combination of things that drive the AI’s decision choices. So where that person is within the 3D space of the studio setup, where they are looking within the studio, are they looking down the barrel of a particular camera, that will be one cue. And audio is of course another clue as well. Because if someone is speaking, chances are you probably want a shot of that person. So there are various things that the AI engine is trained to use to make the camera shot decision.

Jim Jachetta:

But you touched on it a little bit. Maybe go a little deeper. It is automatic, but you have controls. You can set thresholds. So in other words, you and I are talking and I just go, “I agree with you.” You don’t want to cut to me. I’m just concurring. So you can have a threshold: only cut to Jim if he speaks for more than 500 milliseconds.

David Edwards:

Exactly.

Jim Jachetta:

Or if we’re talking over each other, we’re having a banter, then it knows, “Okay. I hear audio on all the channels. I’m going to go wide.” Right? So those types of things… You can guide the AI, I guess, is my point, right?

David Edwards:

You can guide the AI, and there are other things you can do in terms of user input as well. So, you might have seen there was a control panel in that video that we watched a little earlier. The operator there can choose to insert a video clip. The operator can trigger some onscreen graphics, maybe a lower-thirds graphic that you can put in. And you can also configure the system for the user to do a manual camera override. So if the program demands it, you can press a button and switch to a wide shot, or switch to a close up of one of the presenters. So you have all of that flexibility as well. But of course we are talking about offloading a lot of the work. And so, at the heart, you would in most situations use the AI engine to [inaudible].
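As an illustrative aside, the guidance described here, a minimum speech duration before a cut and going wide when several channels carry audio, could be sketched as a simple rule layer on top of a shot selector. This is a hypothetical sketch, not vPilot’s actual logic; the function name, channel names, and the 500 ms default are all invented for illustration:

```python
# Hypothetical rule layer for AI shot selection (illustrative only).
def choose_shot(speech_ms_by_channel, min_speech_ms=500):
    """Pick a shot from how long each audio channel has carried speech (ms)."""
    # Only speakers past the threshold count; a brief "I agree" never cuts.
    talking = [ch for ch, ms in sorted(speech_ms_by_channel.items())
               if ms >= min_speech_ms]
    if not talking:
        return "hold"                    # nobody sustained: keep current shot
    if len(talking) == 1:
        return "close-up:" + talking[0]  # one sustained speaker: cut to them
    return "wide"                        # audio on several channels: go wide
```

So a quick interjection, `choose_shot({"jim": 120, "david": 2000})`, still keeps the close-up on the sustained speaker, while prolonged audio on both channels yields `"wide"`.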

Jim Jachetta:

Let it do its thing. But the anchor could have a confidence monitor in their eye line, and if for some reason they want to override the AI’s decision to go to the guest, they want to bring it back to themselves, they can override with the button if they want?

David Edwards:

They can do that. And you talk about confidence monitors. That’s exactly how we set up our system. So you have a monitor so you can see what is going out, and you can actually have a second monitor so you can see a preview of the next shot that the system is going to choose.

Jim Jachetta:

Very cool.

David Edwards:

If the system is queuing up another shot, you may want to go and look into one of the other cameras [inaudible].

Jim Jachetta:

To get ready. So you’ll see… Yeah, on the preview monitor you’ll see, “Wow, it’s the side of my face. Let me look into that camera.” Oh, that’s cool. I didn’t even think of that. That’s great. So shall we move on to AI sports?

David Edwards:

Let’s move on to another AI system, because AI has the capability to do many different things for us. And so, we’ve produced an AI-driven system for sports production. So yes, next slide. So just as an introduction, we are now thinking about sports games. Yes, I’m showing a picture of soccer here, but the system is tuned for many different sports as well. And we’ll talk about those in a moment. So let’s skip to the next slide. And [inaudible].

Speaker 7:

Televised sports production has been the same for a long time but now it’s changing. Meet IQ Sports Producer from IQ Video Solutions. With the power of artificial intelligence, professional productions can now be created without an onsite camera crew. And thus at a fraction of the cost. The AI has a high level of automation. It tracks both the ball and the players. It overlays the scoreboard and graphics and it creates highlights and replays. High quality, easy and affordable sports production has never been so easy. Tap into new possibilities with IQ Sports Producer from IQ Video Solutions.

David Edwards:

So I think this is one of the coolest things that I saw at NAB, and I had the pleasure to talk about it at the show. One of the questions that I got quite a lot [inaudible] about this technology is, “Is this just a ball tracking system?” And no, it’s more than that. So if we can skip to that blue slide, if we can Jim, that’ll be great.

Speaker 8:

IQ Sports Producer. [inaudible].

Jim Jachetta:

I’ve never seen this. This is cool.

Speaker 8:

High quality 4K cameras to give superior image quality.

Jim Jachetta:

So it even knows what team the individual objects are on.

David Edwards:

That’s right. Knows the difference between the different teams, between the [inaudible] and the referee.

Speaker 8:

The video images [inaudible] directly engaging broadcast production with smooth pans and zooms just like a professional camera operator.

David Edwards:

It knows the difference between a ball and a penalty spot, for example.

Speaker 8:

[inaudible] engine can differentiate between opposing teams to give more accurate action tracking. The AI system disregards off pitch action to focus on showing active gameplay.

David Edwards:

So you can see that people who are off the pitch aren’t interesting to the system.

Jim Jachetta:

Yeah. You saw it reacquire the ball. The ball disappeared. So it didn’t freak out that the ball disappeared. It stuck in that general area till the ball reappeared.

David Edwards:

That’s right. So we’ve been looking at various technologies that are similar to this, and one of the things that people have told us is that with some other systems, for example, in real game play the ball can go off the pitch and come back on maybe at a different place. And so, this is far more than a simple ball tracking system. This system is smart enough to determine where the area of play is. It’s not just the ball; likely the area of play is the ball plus a collection of players around a location. And because we use high quality cameras, it means the AI system has the ability to get in close to the action, which is really what you need.

David Edwards:

So how do we achieve that? Well, perhaps if we skip onto the next slide, there’s a good example here. So you might have seen some pictures of one of the camera systems that we have. It’s a panoramic camera. It’s made out of four 4K cameras rotated into portrait, because that best fits the geometry of the sports pitch. And that enables us to capture the full field of play. But then once we’ve detected where the action is taking place, we can zoom into any area. Now because you have a panoramic image, you have to dewarp the image. But because we are using four 4K cameras, we can do a digital zoom in on any area of the pitch and present a flat image, which is what you get out of a normal broadcast camera.

David Edwards:

So that’s what you’re seeing in this demonstration here. You’re seeing the complete field of view. You’re seeing how the system has dewarped the image to give the flat image you see in the bottom right hand corner, and you can see the area that is identified as the field of play. So that gives you an idea of some of the video processing and image processing technology, as well as the AI engine which detects the area. So next slide please.
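As a rough illustration of the virtual-camera idea David describes, four 4K sensors stitched into one panorama with a digital zoom cropping a broadcast-shaped window around the action, here is a hypothetical sketch. The lens dewarping step is omitted, and the dimensions, names, and zoom factor are all illustrative assumptions, not the product’s actual geometry:

```python
# Illustrative "virtual camera" crop from a stitched panorama (not the
# actual IQ Sports Producer implementation; all numbers are assumptions).
PANO_W = 4 * 2160   # four UHD sensors rotated to portrait, side by side
PANO_H = 3840

def virtual_camera(action_x, action_y, zoom=3.0, out_aspect=16 / 9):
    """Return an (x, y, w, h) crop window centred on the detected action."""
    h = int(PANO_H / zoom)          # digital zoom: smaller window = tighter shot
    w = int(h * out_aspect)         # keep a broadcast 16:9 output shape
    # Clamp so the window never pans off the edge of the stitched panorama.
    x = min(max(action_x - w // 2, 0), PANO_W - w)
    y = min(max(action_y - h // 2, 0), PANO_H - h)
    return x, y, w, h
```

Because the source is an 8K-wide stitched image, even a 3x digital zoom still yields a window larger than HD, which is why the crop can get close to the action without looking soft.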

Jim Jachetta:

There’s one question David. This might be in future versions of the software. I know we’ve done a lot of work with the NHL on tracking player statistics. So somebody’s asking, can this system… I know with hockey, I’m most familiar with hockey, I’m a hockey fan, that players come on and off the ice constantly. Lines substitute in and out, in and out. Soccer, football, same thing. Time on field, time on ice is an important statistic. So I guess theoretically, if the numbers on their jerseys are visible, at some point in the future you might be able to bring in player statistics like that?

David Edwards:

That’s right. And we have some statistical data at the moment. We can export positional data into software packages that drive their business through sports data processing. So we integrate with those systems. But yes, clearly there is a very big business there across the whole of data in sport. And we do integrate with some of that. Because yes, we know [inaudible].

Jim Jachetta:

Yeah. Do you have an open API for this software? Could it integrate with other systems? I’m thinking like maybe a multi-vendor kind of a thing. Another company that’s doing stats, to merge the two systems together.

David Edwards:

So we’ve done a number of integrations on the positional data. So I think the thing to do is, if any of the people who are listening to this are interested in that, then come and talk to us and we’ll find out whether the applications that you are thinking of are ones we’ve already done an integration with, or perhaps there are some new applications where there could be really good business to be had. So come and talk to us about that. Because as you’re saying, Jim, there is a whole raft of opportunities that come out of data analysis.

Jim Jachetta:

The old school approach I know with hockey, guys do this in their semi-retirement. So there’ll be like 20 guys, usually up in the nosebleeds of the stadium with binoculars. And they’re like, “Is that Cole Patava on the ice?” The old guys can tell who’s got the puck by how they skate. They don’t even need to see the jersey number or the length of the guy’s hair. So, all that’s great, right? It’s great if people can physically do that, but if we can automate some of it, that would be great, right?

David Edwards:

Yeah. Yeah. The old ways of doing it are really good, aren’t they? Okay, you’ve switched up the slides. Better talk about this one. So some of the examples I’ve shown you so far have been a central camera located above the field of play. It doesn’t have to be soccer, as in this example. But what really makes the production compelling is what you see in a tier 1 event. So, the whole game of what we’re trying to do here is to give a similar experience to what you might have at a tier 1 game, and you will know that one of the properties of the higher tiers of sports is you have multiple cameras and multiple views that give you different perspectives on the play.

David Edwards:

And so, where we have moved to is having a multi-camera system. And so, at NAB we introduced a product called the Stellar Cam, which you see as the side cameras here, and which we can also use as the center cameras if we choose. And that gives you the ability to produce a much more interesting perspective. And we’ll show you an example of that in a moment. In terms of the sports that we support, let us skip to the next slide, where I’ve got a list and more, if we can Jim.

Jim Jachetta:

Yeah. There we go.

David Edwards:

So some of the sports that we support: soccer, rugby, and rugby is very similar to American football, so we are working on that at the moment. Field hockey, basketball, floorball, ice hockey, you talked about ice hockey Jim, and then some other types of sports. Velodrome, bicycle racing around a track, dog racing around a track. And we’re constantly looking at other sports as well. So there are many sports that we have in our portfolio already, and more that we’re adding all of the time. So despite the soccer examples, think of the other sports too. I think [inaudible].

Jim Jachetta:

Your colleagues were telling me, well you and your colleagues Mark Time and Michelle were telling me, that the way AI works, some of our listeners may know this already, is you teach it, right? You feed it sample videos, like self-driving cars.

David Edwards:

Machine learning. Yeah.

Jim Jachetta:

Machine learning. You feed it videos of the car going down the road and you say, “No, don’t go right because there’s an object there.” And you teach it that’s bad. Same thing here: you’re doing rugby today, but you’ll teach it the nuances of American NFL football. And, for example, in soccer the ball is usually loose. In rugby or American football, when they’re running with the ball it could be hidden. But they’ll teach the AI… Like when somebody’s running with the ball, they usually have it tucked under one arm, with the other arm out blocking. So it’ll look for someone in that position on the field, and that’s where the ball must be, right?

David Edwards:

Yes. Exactly that.

Jim Jachetta:

Or if he loses it, it’ll pan out if it’s not sure where the ball is.

David Edwards:

Yeah. So we haven’t just built a simple ball detector here. It’s much more than that. I showed you at the beginning how the system can determine the differences between two teams, the differences between a team player and a referee, for example. And so, for other games where the ball is the subject of the game but it’s not always visible, that doesn’t matter to our system. So yes, we train it with hours of content of the sport, and we train it what to follow. There are some rules built into the system as well. So one of the things I said at the beginning with soccer: the system knows the difference between a round ball and a round penalty spot, for example. It knows the geometry of the sports pitch and can determine, that’s likely to be a ball, that’s not likely to be a ball.

David Edwards:

There’s some rules in there too. So various techniques like that are some of the things that we employ. So a combination of learning and rules. So it’s a really clever piece of technology. And so, as an example, what you can do with multiple cameras… Yes, let’s run this piece of content. So here is a soccer game in London. It’s not tier 1 but it shows what you can do with a multi-camera system. So here we switch camera automatically under the AI system. Switch back. We are doing some panning, we’re doing some zooming on the action, we are tracking that sporting action and it’s producing something that is really quite compelling to watch because it’s not just a boring pan backwards and forwards. This particular soccer league, it is covered in the UK on national television but because it is a lower league, it is one camera operator at the top of the stand panning backwards and forwards.

David Edwards:

Now that we’ve implemented multiple cameras, you have something that is much more compelling to watch. And in terms of what happens to that human camera operator, well actually we suggest that you keep him at the event, and you can repurpose that camera operator to do halftime interviews or end of match interviews with the players and with the team coach. And so, that way you really are bringing a lot more of what you see at a tier 1 event to lower-tier content, and you are producing content that really is compelling to watch.

Jim Jachetta:

I do see the big difference in… So you have the panoramic camera, the multi-camera panoramic usually at midfield, right?

David Edwards:

Correct.

Jim Jachetta:

And then the sweet spot, where the real action happens, is at the two ends. Well, if it’s football or rugby, the action’s at the end zones. Soccer, hockey, it’s near the goal. So when you saw how it jumped to those end zone cameras, the cameras at the ends, it made a big leap in the quality of the production.

David Edwards:

Yeah. Absolutely. [inaudible].

Jim Jachetta:

Obviously you got to have the budget for two more cameras but it was quite impressive.

David Edwards:

Yeah. Why not play that video again? You can probably click on the thing. There you go.

Jim Jachetta:

Yeah. So now it’s the camera near the goal right now. Oh, I’m sorry, I clicked on something else, let me unpause it, sorry. Yeah. And then when the ball comes back to midfield, so there’s a long shot. Went back to a close up. And then it went to the midfield camera array. Quite impressive.

David Edwards:

And it’s really that camera switching [inaudible].

Jim Jachetta:

It’s very watchable. You had other demos too where the midfield cameras were a little higher than desirable. So you’ll make recommendations on where to put the cameras, and wherever the customer allows you to put the cameras, you’ll optimize it as best you can. But I’m sure there’s an optimal location, right?

David Edwards:

That’s right. So this wasn’t a tier 1 stadium, as you can see. And in those sorts of venues, you don’t always have the budget or the capability to choose the most optimal camera position. And that’s the case here. It’s a very real example. So the view is quite a flat view across the pitch. It’s not ideal, but it’s what is available in this stadium. And you can produce something that is really compelling despite that.

David Edwards:

We talked about the ability of the system to track the ball, and what happens if the ball goes out of view. This system has a look-ahead capability. So it’s actually ingesting video data a few frames ahead of what it plays out. So if the ball were to disappear behind someone for a frame or two, it probably reappears a few frames later. And when the AI engine reacquires the ball, reacquires the sporting action, it can then interpolate the motion path between where it last saw the ball and where it reappeared. And so, it can give you a nice, smooth pan across the total pixel area of the image. So it’s not fooled by the ball disappearing for a period of time.
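The look-ahead idea described here can be sketched very simply: because the engine buffers frames ahead of playout, a ball lost behind a player can be filled in by linear interpolation between its last-seen and reacquired positions, giving a smooth pan instead of a jump. This is a minimal sketch under that assumption, not Vislink’s implementation; the function name is invented:

```python
# Hypothetical gap-filling for a ball track (illustrative only).
def fill_ball_track(track):
    """Fill None gaps in a list of (x, y) ball positions by interpolation."""
    track = list(track)
    known = [i for i, p in enumerate(track) if p is not None]
    for a, b in zip(known, known[1:]):
        for i in range(a + 1, b):   # frames where the ball was hidden
            # Weighted average of the two sightings, linear in frame index.
            track[i] = (((b - i) * track[a][0] + (i - a) * track[b][0]) / (b - a),
                        ((b - i) * track[a][1] + (i - a) * track[b][1]) / (b - a))
    return track
```

For example, a ball seen at (0, 0) at frame 0 and reacquired at (3, 6) at frame 3 gets the two hidden frames filled with (1.0, 2.0) and (2.0, 4.0), so the virtual pan glides rather than snapping.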

Jim Jachetta:

Right. Right. Yeah. This technology probably is not meant for in-venue content, because there is some latency. Or you could make that latency longer so it has more time to find where the ball is. Or is that what you’re alluding to, that you’re a few frames ahead? Can you change that setting?

David Edwards:

Yes. So it is a setting, and we use it to our benefit: we deliberately introduce latency to be able to look ahead and see where the action is going. In terms of how this content is output, it might be put out over a live broadcast channel, or it might be put out over some sort of OTT service. And of course, streaming over OTT services, you can get many seconds of delay, maybe a minute of delay. And so, one or two extra frames of latency in the system is absolutely not the end of the world in these sorts of applications.

Jim Jachetta:

Right. Right. Right. Right. Right.

David Edwards:

So, we use a little bit of latency to our advantage to help steer the AI engine.

Jim Jachetta:

So similar to the vPilot, does this have some settings? So in other words, if it’s a sport that has a ball, a puck or a primary object, if it loses the puck, you can say, “Zoom out or go wide to reacquire.” You can give it some rules, some guidelines?

David Edwards:

So in terms of the capability, you set the sport, and it calls up the particular machine learning algorithm that’s been developed for that sport. And in terms of any particular settings, this one is not manually directed in that way. So it is always analyzing the complete field of play with its panoramic cameras and side cameras. And so, it always has the best possible view, and it steers itself in that production. But you can do things. You have the ability to cut in additional content. You could, for example as I said, mix this on-pitch content with halftime interviews at the sideline of the pitch with the camera operator and one of the players or the coach, and you can augment this AI system with the previous studio-based AI system that we looked at.

David Edwards:

So you can have a studio-based discussion about how that team did and how the game went. And so, you can give what is very much a full tier 1 production type of event: the match itself, interviews with the staff down at pitch level, and discussion programming in the studio. You can bring all of those elements together if you wish.

Jim Jachetta:

Yeah. In your longer presentation, you had a slide where you mentioned a single camera operator shooting the whole game from one position, the old way of doing it. One camera, one position. That person is now on the field, on the pitch, getting reactions during timeouts, putting a camera in the face of the coach during intermissions. You can cut to that more traditional camera and intermix it with the AI. That's easy to do.

David Edwards:

You can. And so, that gives something very similar to a tier 1 experience without adding additional cost. Which is where we came in, isn't it, really? Making the most of new content streams and monetizing them with compelling content.

Jim Jachetta:

Great. Great. So Cindy's been very helpful; she's been wrangling the chats. I think we answered most of the questions. There are quite a few people interested in IQ Sports. Let me just make sure we answered everyone's questions. We'll certainly reach out to everyone after the event, and there will be a recording. We're doing three sessions back to back, so it'll probably take us about a week to get all the content up online. I put up my phone number, my email, and a link to book a meeting with me. Of course, we bring in the experts like David and his colleagues to do demos. Let me see, house of worship.

Jim Jachetta:

We were talking about this. So Art was saying that he works with houses of worship, and we were saying that the vPilot might be applicable for house of worship, since the cameras are relatively close to the talent. Things have opened up post-COVID; it's hybrid now, where people are attending events in person. Maybe 75% of people are back, but 25% are still at home. Maybe the elderly are not going out to events, or people who have health concerns. Maybe speak to that. You and I talked about that a little at NAB.

David Edwards:

That's right. And so, this is something where we think the vPilot product, this AI studio tool suite that we showed earlier, could potentially start to revolutionize house of worship production. In the example you saw, the cameras were quite close to the presenters, but we can move all of that hardware to approximately 30 to 40 feet away from the stage. And so, that opens up the possibility for some house of worship applications. We have ideas as to how we can do things differently and deal with much, much bigger stages and events, and automatically produce those events.

David Edwards:

Cutting between the pastor and choirs, for example, at a greater distance. But to start with, we think we have a solution that is very implementable for distances where cameras might be 30 to 40 feet away from what is happening. So we are interested in actually trying that. If there's anyone online at the moment who wants to be part of a trial, we'd be very interested in speaking about that, and then we can learn and [inaudible].

Jim Jachetta:

Yeah. Art, if you'd like to dive a little deeper, we can certainly work with you. So, as we discussed, I volunteer at my local church, and I know Cindy helps with AV at her local church. It's a typical setup: maybe there's a lectern the pastor preaches from, and there's usually a band. So you could tie in the microphones so you know who's singing, right? You would bring the pastor's microphone in on a channel, and maybe the lead singer and some of the backup singers, so you know who's singing, who's live. The system can't sense that somebody's mouth is open and they're speaking; you get the audio cues from the actual audio channels, right?

David Edwards:

So the audio cues are one of the clues the AI engine uses to choose which shots to pick, yes. In a house of worship application, that might be one of the major clues to the AI engine for which shot it would choose.

Jim Jachetta:

Yeah. So here's an example: the lead singer's mic could be routed through, and then the backup singers and the rest of the band get coverage when it switches to a wide shot. Then you could do a whole church production with no operators, or minimal operators.
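The audio-cue routing Jim and David describe can be sketched as a simple priority rule. This is a hypothetical illustration, not vPilot's logic: the channel names, shot names, and the -40 dBFS threshold are all placeholders. The idea is that the loudest active mapped mic picks its camera, and when no mapped mic is hot the system falls back to a wide shot so the band gets coverage.

```python
# Hypothetical mic-to-shot mapping; replace with your own channels/cameras.
SHOT_FOR_MIC = {
    "pastor": "cam1_lectern",
    "lead_vocal": "cam2_singer",
}
WIDE_SHOT = "cam3_wide"
THRESHOLD_DBFS = -40.0  # below this, a channel is treated as silent

def pick_shot(levels_dbfs):
    """Pick a camera shot from per-channel audio levels (in dBFS).

    The loudest active mapped channel wins; otherwise cut to the wide
    shot so the backup singers and the rest of the band get coverage.
    """
    active = {ch: lvl for ch, lvl in levels_dbfs.items()
              if ch in SHOT_FOR_MIC and lvl > THRESHOLD_DBFS}
    if not active:
        return WIDE_SHOT
    loudest = max(active, key=active.get)
    return SHOT_FOR_MIC[loudest]
```

For example, `pick_shot({"pastor": -12.0, "lead_vocal": -55.0})` cuts to the lectern camera, while a mix where only unmapped channels (drums, congregation) are hot cuts to the wide shot.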

David Edwards:

Yeah. And it could add a new level of quality. You'd be using broadcast cameras where maybe you aren't today. Volunteers come and go, and here is something you can set up without having to rely on people volunteering to do certain things. So yes: high quality, and you can just switch it on and you're on air.

Jim Jachetta:

I'm thinking about my church workflow. So with the vPilot, does the backend have hooks to social media, or do you need a streaming platform behind the vPilot to reach social media? What's the integration to the cloud?

David Edwards:

So in this application, it's a video handoff, but you can then of course choose your own backend for streaming out to your viewers, to your congregation.

Jim Jachetta:

So you can use anything? So you could use OBS or any kind of streaming platform? It would be handled? Okay.

David Edwards:

Any streaming platform: whether it's YouTube, as we're using today; whether it's Facebook Live; or whether it's a platform that's been built for you, that you have complete control over yourself. All of those things become possible.

Jim Jachetta:

Well, I imagine vPilot is meant for a more traditional broadcast. I imagine the I/O is SDI probably, right? That’s the primary output?

David Edwards:

That’s correct.

Jim Jachetta:

And then you can buy, or VidOvation can provide, an inexpensive SDI-to-USB capture device that plugs into any computer. Then boom: run your social media software and you're good to go. Those would basically be the pieces? The workflow?
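The workflow Jim outlines also works without a dedicated streaming app: most SDI-to-USB capture devices enumerate as standard UVC webcams, so a command-line tool like FFmpeg can push them straight to a platform's RTMP ingest. The sketch below just assembles such a command; the device path, stream key, and bitrate are placeholders you would replace with your own, and the audio side is omitted for brevity.

```python
def build_stream_cmd(device="/dev/video0",
                     rtmp_url="rtmp://a.rtmp.youtube.com/live2/STREAM-KEY"):
    """Assemble an ffmpeg command that streams a UVC capture device
    (e.g. an SDI-to-USB dongle on Linux) to an RTMP endpoint."""
    return [
        "ffmpeg",
        "-f", "v4l2", "-i", device,   # read the capture device as a webcam
        "-c:v", "libx264",            # encode to H.264
        "-preset", "veryfast",        # favor low CPU over compression
        "-b:v", "4500k",              # illustrative 1080p-ish bitrate
        "-f", "flv", rtmp_url,        # wrap in FLV and push over RTMP
    ]

cmd = build_stream_cmd()
```

You would run the resulting command with `subprocess.run(cmd)`; tools like OBS do essentially the same capture-encode-push pipeline with a GUI on top.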

David Edwards:

That's correct. Yes. You've got it.

Jim Jachetta:

Okay. Okay. Well, thank you so much, David. Here are some links to where David works. What was it, about six or nine months ago that Vislink acquired Mobile Viewpoint?

David Edwards:

It's about that. Time seems to have zoomed by, but some of the technologies we've talked about, and some we haven't talked about today, have already come together between the two organizations. So there have been a lot of synergies, and a lot of benefits to users already, from bringing the technologies of both companies together. So maybe in a future presentation we'll talk about some of the new products and new tools the combined organizations are bringing to the market.

Jim Jachetta:

Yeah. Actually, I'm going to have David on again in two weeks. Let me get my calendar up so I don't give the wrong date. We usually do webinars on Wednesday; we're doing Wednesday, Thursday, Friday this week, but normally it's Webinar Wednesdays. That's what we call them at VidOvation. So we're going to have David again on Wednesday, June 1st, and we're going to talk about the HCAM. Maybe you want to give a little preview. I'm thinking of the HCAM with FocalPoint, because you can't do a broadcast wireless camera system without camera control. So maybe give us a little preview of what to expect in two weeks.

David Edwards:

Okay. So HCAM is a broadcast wireless camera transmitter, and it's the product you might see attached to a camera, attached to a camera operator running up and down the sideline of a major sporting event, getting right to the action with the freedom of being wireless. It does this with almost zero frame latency, which you need to be able to cut between a wireless camera and the wired cameras that might be up in the stands somewhere, and to make that cut without presenting any time difference to the viewers back home. So it's a product that gives wireless freedom for those tier 1 major events, and it adds an extra dimension by getting up close and personal to the sporting action. That's the product we'll be talking about in a few weeks' time, along with all of the tools and techniques that ensure you get tier 1 quality from the sideline, wirelessly, with almost zero frame latency and [inaudible].

Jim Jachetta:

And David wants to come back. Well, this is your second time, so in two weeks will be your third. And for a fourth, you're welcome to come anytime, David. You're a great presenter, and I thank you. We'll see what little stuffed animal is over your shoulder next time, the ones your kids put there. Or we could do a contest: guess the animal that's going to appear over David's shoulder in two weeks, and you win a prize. And David wants to do something I think is a great idea: he's going to join one of his partners at Mobile Viewpoint. I love your nickname for Mobile Viewpoint, you call it MVP; that's just great. He's going to go to MVP and be in the studio with one of his colleagues, and we're going to do a video podcast.

David Edwards:

Visual podcast.

Jim Jachetta:

Yeah. Yeah. And show the vPilot working in real time with no tricks, no edits. We're going to stream out the live production of two people talking together, interviewing each other. I won't be there; I'll just be the moderator streaming it out to you guys. I think that'll be great, so I look forward to that as well. Thank you, David, so much for joining us today. You can reach VidOvation at vidovation.com. Sometimes people have a tendency to put an E in VidOvation; it's V-I-D-O-V-A-T-I-O-N.com. You can also reach us at info@vidovation.com, and our phone number is (949) 777-5435. We can loop David and his team into the conversation and facilitate your live production needs. Thanks everyone. Thanks David.

David Edwards:

Thank you. Thanks, everyone.

Jim Jachetta:

See you in two weeks.

And stay tuned for other webinars this summer. Thanks so much, everyone, have a great day. Thanks for joining, and keep an eye out for a copy of the recording. Thanks so much. Bye-bye.

 
