How to simplify your remote multicamera productions with CyanView IP camera control & shading systems [Webinar Recording]
Remote Production – Live + Remote Control
This webinar has already happened. You may download a copy of the presentation and watch the video recording.
How to Simplify Your Remote Multicamera Productions with CyanView and VidOvation
Find out how the latest camera control system from CyanView can universally improve and simplify your live broadcast production workflow.
- Capture any event from any angle
- Address camera control, video correction, and video transport from a single system
- Control all stages of the ecosystem from a unified interface
- Leverage IP technology to increase flexibility and reduce cabling and clutter
- Ensure seamless integration of specialty cameras into your production
Jim Jachetta (00:26:15):
Good morning, everyone. This is Jim Jachetta. I am CTO and co-founder of VidOvation. Today, we have a very special guest, David Bourgeois, from CyanView.
David Bourgeois (00:26:26):
Jim Jachetta (00:26:28):
CyanView is one of the leaders… Good morning, David. CyanView is… I want to jump right into what you do, David, but thank you for being here. So CyanView solves a big problem in production, and that is IP camera control and camera shading. On a typical production, we may have studio cameras, field cameras, PTZ cameras, GoPros, high frame rate cameras. So you have dozens of different vendors of cameras. How do we control everything? How does a video engineer control all of this? So David, tell us how you solve some of these challenges. Tell us why we need to know about CyanView.
David Bourgeois (00:27:18):
Thank you, Jim, for the invitation. And I’m very happy to be there with you on this webinar. And definitely, I mean, we tackled camera control, basically working on high-end productions, and meeting the request of having shading and control of specialty cams, that’s where we started. And that’s actually developed, and now we’re covering a lot of brands, a lot of models. And this is what I’m going to showcase here. You see the slides, right?
Jim Jachetta (00:27:52):
Yes, we do.
David Bourgeois (00:27:53):
Okay. So this shows the areas, or segments, that we started with for control. Those are specialty cameras, integration with OB trucks, ENG cameras, and digital cinema. It has tended to develop and evolve into six more segments. And mini cameras is definitely one. I mean, most of the time you don't have an RCP with mini cams, and it's difficult to get that into the workflow that you're using in live broadcast, and this is where we started.
David Bourgeois (00:28:35):
We added PTZ. And now robotics, that [inaudible 00:28:38] that we integrate more and more. When you're in a broadcast truck and you have color correctors, we integrate with them as well, in a way that we had the RCP there, and some of those specialty cameras you cannot get control of, so you're using a color corrector to still be able to paint them. And that's actually where we said, we can put that on the CyanView RCP CY-RCP as well.
David Bourgeois (00:29:01):
And then we have ENG. That was not really the goal, to control the main Sony cameras used on high-end events, but there are many situations where you cannot control them. And so that's where people wanted us to still get control of those. And the same for digital cinema cameras. We're talking about the FS5, FS7 from Sony, the Panasonic EVA1, or the Canon C200, C700 range. So these are definitely a nice add-on for us to have control of. And when you take all this, you now have the project that we're going to talk about today, remote production. It's to control all of them, but over the internet.
David Bourgeois (00:29:47):
So that’s the summary of all the things that we’re doing. Now, what are our products? Our products are a bunch of different remotes and small boxes that are used to interface cameras, to do color correction, or to interface with GPIO or different parts. And so these modules are kind of building blocks that you can use to build your workflow, and to say, “Okay, this is what I need to control, this is what I need to interface.” So those are the physical parts, but there is a lot of software development that we need to integrate with various pieces of other equipment. This would be a simple diagram. So you have the RCP, that’s the main piece, which gives you all the camera control. And now, if you take one of those mini cameras, they are going to be serial, but our system is all IP-based. So the CIO is a camera interface that’s going to take the IP control and interface that with any of those serial cameras. If you have an IP camera, like a PTZ or a Panasonic ENG, in that case you don’t need an interface. It’s just on a network, and you can control it directly. So that’s, I would say, the most simple workflow that you can get. Just put your RCP, a switch, and you can paint IP cameras. If you add the camera interface, then you can also paint the serial cameras.
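The CIO's role described above can be sketched in a few lines. This is a hedged illustration, not CyanView's actual firmware or wire protocol: the packet framing, port numbering, and class names are all invented for the example. The key idea is that the bridge forwards payload bytes verbatim, with no interpretation, to whichever serial port the camera is on.

```python
class SerialPort:
    """Stands in for a physical serial line; records the bytes written to it."""
    def __init__(self):
        self.written = bytearray()

    def write(self, data: bytes):
        self.written.extend(data)

class CameraInterface:
    """Two-port IP-to-serial bridge: the first byte of each network packet
    selects port 1 or 2, and the rest is passed through untouched."""
    def __init__(self):
        self.ports = {1: SerialPort(), 2: SerialPort()}

    def handle_packet(self, packet: bytes):
        port_id, payload = packet[0], packet[1:]
        self.ports[port_id].write(payload)  # no parsing of the payload itself

cio = CameraInterface()
# Forward a VISCA-style command frame to the camera on port 1.
cio.handle_packet(bytes([1]) + b"\x81\x01\x04\x00\x02\xff")
```

Because the bridge never interprets the payload, the same box can carry any serial camera protocol; the intelligence stays in the control software on the network side.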
Jim Jachetta (00:31:24):
I have to say, David, the products are very, very well-made. You see how there’s a 1/4-20 screw thread on the bottom of the unit, then a 1/4-20 receptacle to put the camera on, so you can mount it on a hot shoe on the camera. Or in this case, between the tripod and the camera, just sandwich the controller there. And they’re made out of extruded aluminum, right? They’re very rugged, very well-made.
David Bourgeois (00:31:55):
Yeah. It’s CNC aluminum, indeed.
Jim Jachetta (00:32:00):
David Bourgeois (00:32:00):
And yeah, it allows us to have a small block, but strong, indeed. I mean, if you see the cable, it’s short; it’s an adapter to the [inaudible 00:32:12] camera. I will show you some other cameras. We have a [inaudible 00:32:17] of short adapters for any brand and model of camera that we control. And then you can, of course, extend that. If you want to put the CIO at another place, then we have extension cords. Or a lot of customers build their own cables. We have all the pinouts available on our website, on the support pages.
David Bourgeois (00:32:35):
Now, when we are talking about integration with a higher-end production, this can get more complex. And this is one of the typical applications that we have here. On the left, it’s still the same. I mean, you have cameras, IP cameras or serial ones, interfaced. Everything goes to a switch. I mean, usually stage boxes today have a LAN port, or you can have a media converter, a switch with an SFP. On the truck side, or the control room, this is where we have more software. So you still have the RCP there, but we integrate with routers, in a way that whenever you have, let’s say, 50 cameras (we go up to that; you can control 50 cameras on a production), you want to be able to synchronize the router and the RCP. So whenever you select a camera on the panel that you see on the right, picking any signal and camera, the RCP will track that and will automatically select that camera, so you can immediately paint it. You don’t need to select the camera on the RCP itself.
David Bourgeois (00:33:45):
Now, if you select the camera on the RCP, then it will synchronize with the router again; it will control that and just switch your monitor, so you’re actually going to see the camera that you’re painting. So it’s both ways. It’s like the preview that you have on your standard RCP, except that because we control many cameras on one single RCP, we need multiple previews, and that’s the way we do it. Whenever you click preview, we know which camera has been selected on the RCP, and we request the signal on the router. And the other way around: when you select a new signal on the router, the RCP will just know which camera it is.
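The two-way router/RCP tracking described above can be modeled roughly like this. Everything here is illustrative (the class and method names are not CyanView's API); it only shows the two directions: a router selection drives the RCP's active camera, and an RCP selection drives the router's preview crosspoint.

```python
class Router:
    """Preview router: remembers the selected source and notifies a listener."""
    def __init__(self):
        self.preview_source = None
        self.on_select = None  # callback wired up by the RCP

    def select(self, source: str):
        self.preview_source = source
        if self.on_select:
            self.on_select(source)  # router -> RCP direction

class RCP:
    def __init__(self, router: Router):
        self.router = router
        self.active_camera = None
        router.on_select = self.follow_router

    def follow_router(self, source: str):
        self.active_camera = source  # RCP tracks the router panel

    def select_camera(self, camera: str):
        self.active_camera = camera
        self.router.preview_source = camera  # RCP -> router direction

router = Router()
rcp = RCP(router)
router.select("CAM 7")            # picking a source on the router panel
tracked = rcp.active_camera       # the RCP followed the router
rcp.select_camera("CAM 12")       # or picking a camera on the RCP
```

Note the RCP-to-router direction writes the crosspoint directly rather than calling `select()`, so the two directions cannot trigger each other in a loop.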
Jim Jachetta (00:34:21):
Wow. Yeah, we were talking about this… I was talking about this with you, David, but also with a customer yesterday. I’ve done some video engineering. Before COVID, I used to volunteer every third Sunday as a video engineer. And Saddleback Church uses Sony cameras; it’s an eight-camera production. So there’s eight Sony controllers, CCUs. Eight sticks, as you were calling it. So I either press on the joystick to pick what I want on the monitor, so that syncs with the switcher, or I press the buttons on my switcher to paint-
Jim Jachetta (00:35:00):
[inaudible 00:35:00] or I press the buttons on my switcher to paint. So you eliminate the need to have a stick or a controller per camera. It’s kind of overkill.
David Bourgeois (00:35:12):
It really depends on which kind of production, but definitely, especially with the mini cameras where we started. I mean, if you have 20 mini cameras, it makes no sense to line up 20. You have no space in the trucks.
Jim Jachetta (00:35:26):
David Bourgeois (00:35:27):
If you’re talking the Super Bowl or the NFL, our system is used now also to control the pylon cameras, and you have 12 of them; you don’t want to put 12 RCPs there. And so the way it works, you would just pick the camera you need on the router panel and the RCP would select it. For mini cameras like this, it’s very, very convenient. You can still line up four RCPs if you want to control four cameras, and I would say this makes sense whenever you still, with two hands, want to control four cameras at the same time.
David Bourgeois (00:36:03):
So you are going to look at a multiviewer in that case. If you want to be able to paint those four live, without having to switch, this is a case where it makes sense to have four RCPs. I will come to a slide explaining another type where it makes sense. But when you’re talking about many mini cameras, or even PTZ, if you have too many cameras and you are not going to paint two or three at the same time, we made it actually very convenient to just get one RCP and paint them all sequentially.
David Bourgeois (00:36:47):
I mean, we developed that during the 24 Hours of Le Mans. That was one of the first projects that we did with AMP Visual in France, and we had like 40 cameras, onboard cameras, on one RCP. We actually had two RCPs, but we were two people painting them at the same time. So we could paint the same camera, but… Well, that would be no problem, but we just shared them based on the need that we had. We also-
Jim Jachetta (00:37:14):
[crosstalk 00:37:14] like you have the flexibility, so your control can coexist with the more traditional or the camera manufacturers’ control. You can mix and match, and coexist.
David Bourgeois (00:37:26):
Indeed. You just work the way you want. You can have maybe two RCPs that you want there all the time, each stuck to one camera, and then have a third one that would cover all the other cameras.
Jim Jachetta (00:37:36):
David Bourgeois (00:37:37):
[Crosstalk 00:37:37] possible. On the slides, you also see the color correctors. And again, if you have a bunch of cameras that you cannot control (let’s say those are onboard cameras, and you don’t want to send control wirelessly to them), you are going to put an AJA or any kind of color corrector there. I mean, it can be [inaudible 00:38:00]. In that case, we would basically control the color corrector from the RCP. So you can do that from your shading position.
David Bourgeois (00:38:10):
I mean, you don’t have to suddenly go to a computer or to the small knobs that you have on the frame units. So that’s definitely one of the… We have people getting our RCP only for color correctors. They have no camera to paint. They are just using it because in an OB, when you have eight or 16 channels of color correction, it’s still very convenient.
Jim Jachetta (00:38:33):
David Bourgeois (00:38:34):
[inaudible 00:38:34] the hands of video engineers.
Jim Jachetta (00:38:37):
David Bourgeois (00:38:40):
To give you a couple of ideas of what people are using our boxes for in the different applications there: on top-tier productions, this is all about specialty cams. I mean, they have plenty of Sony RCPs and Sony CCUs, base stations. There was nothing to bring in there, but we’re covering everything else. And the picture tells it all. So this is, I would say, the only other RCP that you need to control mini cams, PTZ, whether hi-motion or other kinds of specialty cameras that we have, your color correctors. Robotics, we’re starting to integrate more and more [inaudible 00:04:15], because some of them are not really IP, or you want to synchronize that with a camera. So that’s also something that you are adding in the workflow. Wireless, we’ll talk about that a bit later, drones. And the CCU, it’s actually not really a CCU.
David Bourgeois (00:39:36):
It’s a color corrector, but it’s our own. We developed it in a way that it’s small. It has the functions of a CCU, like detail or multi-matrix and these kinds of things. And the reason we developed that is to be able to match cameras. If you take most of the standard color correctors, they have a proc-amp, a video processor inside, which is a legalizer, and you can normalize the signals. You can adjust the blacks, the gamma, the whites, to make the signal clean, and the whites will be white. But if the red is pink, you cannot take the red and put it in the right place. It’s impossible. If you take two different cameras that have two different sensors, it’s impossible to match them. And that’s the reason why we said we need to develop something that will act more like a CCU, that will allow you to have the controls you need to match cameras together.
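A toy numeric illustration of the proc-amp limitation being described here, under simplified assumptions (real matrix correction works on hue regions in color space, not a bare red-channel gain, so this is only a caricature of the difference):

```python
def proc_amp_gain(rgb, gain):
    # a proc-amp style adjustment scales the whole picture at once,
    # so the ratio between channels (the hue) is preserved
    return tuple(round(c * gain, 3) for c in rgb)

def matrix_red_trim(rgb, red_gain):
    # a CCU/matrix style adjustment can move one color without the others
    r, g, b = rgb
    return (round(r * red_gain, 3), g, b)

pinkish_red = (1.0, 0.45, 0.55)                  # a red that reads as pink
global_fix = proc_amp_gain(pinkish_red, 0.9)     # darker, but still pink
matrix_fix = matrix_red_trim(pinkish_red, 0.85)  # only the red channel moves
```

The proc-amp result keeps the same channel ratios, which is why two cameras with different sensors can never be matched by level adjustments alone.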
David Bourgeois (00:40:25):
And I will show this right after, but I would say some of them are still projects that are going on, and we’re working with these companies trying to figure out the right way to do that. But mini cameras, PTZ, and POV color correction with the CyanView VP4 Camera Color Corrector are definitely products that are getting mature now, and that you can easily put in place yourself. The other ones are projects where we’re working closely with some big companies. Those are for the top-tier productions. Now, if we move to smaller productions, in that case, they don’t have these system cameras, all the Sony cameras and so on. So they are going to take ENG cameras, camcorders, and they’re going to do live productions with these, even digital cinema cameras. And in this case, we’re actually not controlling the specialty cams, but the main cameras of these productions.
David Bourgeois (00:41:22):
So those are going to be compact camcorders, the bigger ones, digital cinema, as I said: Sony, Panasonic, Canon, Blackmagic. We are in that as well. And you can still have mini cams; I mean, the Marshall range or IO Industries are used there. And for robotics, we also integrated gimbals. So if you put a Marshall on a gimbal, or the small pan-tilt head from Marshall, we integrate that as well. So those are pieces of equipment that you can now control from our system.
David Bourgeois (00:41:57):
So that’s really a different setup. In that case, you’re saving money, but mainly, you would normally not have an RCP; you would not have control of these cameras. So it’s basically adding shading capabilities to these small productions. Whereas earlier, you would basically do that over the comms [inaudible 00:42:19], asking the camera operator to put in a bit more red or blue. Now it’s an easy way to still be able to paint everything. And if you have like a cart or a flypack and you have no space, this is where it makes a lot of sense to have one RCP to control everything.
Jim Jachetta (00:42:38):
David Bourgeois (00:42:39):
And then the third, I would say, different type of application that we are doing today is remote production. We developed that even more this year with the situation with COVID, but we started earlier. And that’s really taking all the cameras that we control, everything, but now you want to control that over the internet, or you want to control that on a motorbike if you’re doing a cycling race or a marathon and you have an ENG camera. There are so many video transmissions using cellular today (we are going to talk about that anyway) that the next step was to be able to have camera control in these kinds of situations. And so on the top-left picture that you see there, there are four RCPs, and this is a remote production where they would do multiple soccer games at the same time.
David Bourgeois (00:43:34):
And we have four RCPs there; all the games are four-camera games. If you select game one, the four RCPs are going to take the four cameras of game one, and there might be four Sony cameras. And then if you select game two, the four RCPs are going to swap to the four cameras of game two. And so you have like a multiviewer in front of you showing three games at the same time, and you can just select the game that you want to do. And then you have the four RCPs to paint them. So that’s another situation.
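That game-switching behavior can be sketched with made-up names: the four panels are re-pointed as a group, and because camera settings live on the camera side, nothing is lost when the panels swap.

```python
# Cameras per game (illustrative labels, not a real production).
games = {
    1: ["G1-CAM1", "G1-CAM2", "G1-CAM3", "G1-CAM4"],
    2: ["G2-CAM1", "G2-CAM2", "G2-CAM3", "G2-CAM4"],
}

rcp_assignment = {}  # which camera each of the four RCPs points at

def select_game(game_id: int):
    # re-point all four RCPs at once; camera state is untouched
    for rcp_index, camera in enumerate(games[game_id], start=1):
        rcp_assignment[rcp_index] = camera

select_game(1)
select_game(2)  # all four panels swap to game 2's cameras
```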
Jim Jachetta (00:44:02):
[inaudible 00:44:02] the important thing too is that your appliance on the far end, when you switch the camera, doesn’t lose its settings? It stays? The iris and the blacks stay where they’re set when you jump from game to game or camera to camera, correct?
David Bourgeois (00:44:19):
Exactly. So the RCP is just saying, now I want to control this one. But actually, when I say it’s a complete architecture, all the cameras are still controlled all the time. It’s not because your RCP is not actively controlling this one that the system stops controlling it. Then you can have like another RCP. You could have another kind of interface which actually can still control the camera itself. [crosstalk 00:44:46]
Jim Jachetta (00:44:47):
You can have a backup operator working from home or another location controlling that camera so the workload could be shared amongst people in different locations. That’s why this has proven so valuable during COVID and locked down and social distancing. Correct?
David Bourgeois (00:45:06):
Indeed. I mean, you can control that over the internet, as I said. You could also have an RCP locally, it depends on the kind of production, and then remotely you can control that. And now we are adding the possibility to have a completely distributed architecture, in a way that you will be able to put an RCP at any place to control anything. I would say you might have multiple productions at the same time, and from one RCP you will be able to control them all. But you might also have a second RCP at another location or, like you say, at home, and still have paint control of all these cameras. And not only paint; I mean, if you want to move them, framing, these kinds of things, that can be done as well.
Jim Jachetta (00:45:51):
David Bourgeois (00:45:52):
And so those are, again, all the cameras that we support, but it’s now used for any style of production. This one is just a slide to showcase a lot of brands that we’re integrating, a lot of specialty cams, because that’s where we started. There’s a bit less ENG, but that’s what we’re working on at the moment. So we have Sony, Panasonic, Canon, Blackmagic, and we are going to soon add support for Grass Valley, Ikegami, and JVC. All the other parts are quite developed. We need to add a bit more color correctors, but we have quite a good number of them already. And so that’s sort of the game…
Jim Jachetta (00:46:44):
Go ahead. [crosstalk 00:46:45] What I was going to say is, in our experience with shading of cameras, some people just say, well, why don’t I just stick the controller onto the public internet or onto the network? Why won’t it just work magically? Some camera systems, from what we’ve found (and you can correct me, David), are very sensitive to latency on the network. So if you go through the public internet or you go through cellular, the RCP and the camera will time out if there’s more than 10 milliseconds of latency, or five milliseconds of latency. They’re really designed for being in the same studio on the same network, right? So your technology smooths out, or really gives the ability to communicate over the internet, correct?
David Bourgeois (00:47:30):
Exactly. And maybe I will come to that a bit later. Here is exactly what you’re saying. I mean, the CIO interface is going to take some IP packets and send that to the camera. And those mini cams are serial-based, and serial-based means that when they receive some data, they are going to send the data back less than one millisecond after. So it’s very quick. And if they are requesting something and you don’t answer, if there is a kind of acknowledgment system in place and you don’t answer within a couple of milliseconds, they will just reset the connection or this kind of thing. A Sony camera will basically go back to its own settings. If you take a camcorder, that’s normally a camera that you will operate from the camera itself; I mean, the cameraman himself will operate the camera.
David Bourgeois (00:48:23):
So all the settings are done through the menu or the knobs of the camera. Now, if you are controlling it remotely, you plug the cable in, and if you don’t answer the acknowledgments of the camera, it will just think that the RCP is not there anymore, and it comes back to its own standalone mode. And some of the cameras just change the settings completely. So you’re losing all the painting that you did with the RCP; suddenly the iris goes way out, and these kinds of things. So controlling cameras locally is where you should start. But when you’re talking about going over the internet, even though everything is IP, it’s not the same story at all. It depends on which protocol the cameras are using and how they’ve been made. No camera today has been made to be controlled over a high-latency network and the internet. They’ve never run like this. [crosstalk 00:49:17]
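The timing problem can be made concrete with a toy model. The 5 ms deadline and the delays below are assumed numbers for illustration only; the point is that a WAN round trip is orders of magnitude longer than the acknowledgment window a serial camera expects, which is why the acknowledging bridge has to sit next to the camera while the RCP talks to the bridge over the internet.

```python
ACK_DEADLINE_MS = 5  # assumed: camera gives up if the ack is later than this

class SerialCamera:
    """Toy model of a serial-controlled camera's ack-timeout behavior."""
    def __init__(self):
        self.rcp_present = True

    def poll(self, ack_delay_ms: float) -> bool:
        # a late acknowledgment makes the camera assume the RCP is gone,
        # dropping it back to standalone operation
        if ack_delay_ms > ACK_DEADLINE_MS:
            self.rcp_present = False
        return self.rcp_present

cam = SerialCamera()
over_internet = cam.poll(ack_delay_ms=80)      # typical WAN round trip: fails
cam.rcp_present = True                         # reset the toy model
via_local_bridge = cam.poll(ack_delay_ms=0.5)  # local bridge answers at once
```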
David Bourgeois (00:49:20):
I know companies like Panasonic, Grass Valley, and so on have their own solutions today for remote production. And it’s really adding layers to what they had already, or redesigning that completely from scratch, to make their systems work. What we did, and I will explain that later on with the RIO, is take all of this kind of local control that we have and bridge that over the internet. So the camera is still controlled locally, but we can put the RCP wherever we want.
Jim Jachetta (00:49:51):
David Bourgeois (00:49:51):
I will explain that a bit more on another slide. Right now, this one is really about local control. So let’s say you have the RCP, a couple of switches or a local network, and you want to control these serial cameras. You just use this CIO interface, connect the camera, and you get control very quickly and easily.
Jim Jachetta (00:50:14):
So the CY-CIO Camera Serial to IP Control Converter you’d use on the local network, and the CyanView CY-RIO Camera Control & Shading over the Internet & Cellular, which you’re going to get to in a minute, you would use over the internet or over cellular, or if there’s some latency between the camera and the RCP, correct?
David Bourgeois (00:50:30):
Exactly. You should be able to see that the CIO is quite advanced in the number of protocols it supports. So we support S.BUS for gimbals; we support different kinds of proprietary protocols that we have in there. The translation is quite complex, but you should be able to see it like a USB converter that goes from USB to RS232; this one is IP to a lot of serial protocols. But it doesn’t have any intelligence, in the sense that it doesn’t know what it does. It doesn’t know what a gain is or anything; it just receives a packet over IP and puts that on the right electrical interface for the camera. So it’s been designed to support all the protocols that are used in broadcast. That’s what makes it very different from just off-the-shelf IP-to-RS232 converters or these kinds of things, because we still add specific things depending on what camera it is.
David Bourgeois (00:51:32):
If it’s a Sony, if it’s VISCA, or Marshall, AIDA, and so on, we’re going to do special things to make sure that it works well. But besides this, for you, you can still consider this an IP-to-serial interface. The RIO is like a computer, which also has an interface built in, but the major part is the software running on it, which handles all the protocols that we support. So I can maybe do a demo of these to show you. You have one of those typical Marshall [inaudible 00:52:05], a lot of them, or maybe a Sony block camera, and you want to control that. So let me switch to this. There is latency there; I was waving my hands, so you will see a bit of latency in the video. Here on my desk I have the RCP. I will switch to the other one here so that you will see the RCP there on the right, and what I’m doing here. And then I have a couple of mini cams here, I have a Blackmagic router, I have our own VP4, so that’s our color corrector, and a PTZ from Panasonic over there.
David Bourgeois (00:52:59):
First, let’s say I’m going to take this; this is the ATOM one mini from Dreamchip. We have a Marshall, we have an AIDA camera, and the IONS3 here. They’re all configured, they’re on the system. So right now it’s running, but let’s assume that it’s actually not. Let’s configure it all. I will show you the configuration interface here. So the ATOM camera is this one; I’m going to delete that one and we are going to recreate it. So whenever you have a new camera, you want to unscrew that one. As you see on the CIO, there is a screw at the bottom as well, so you can put that on a tripod. I screwed the camera on here. We have those cables that are adapters, that I will plug into one of the ports. So the CIO interface has two ports, port one and port two.
David Bourgeois (00:53:58):
I’m connected on port one right now. And the serial number of this one is 12110. So the cable is actually going to power and control the camera; both go in that. So the CIO is PoE; I just have the Ethernet. The other cable is the video, through SDI. So just one Ethernet cable with PoE powers the camera and controls the camera. Actually, you still see the configuration interface there. I was just repeating myself a bit, but this is the camera interface here. This is the cable. So you just plug the cable into one of the connectors here. This is a PoE cable that drives and powers the camera. So you see, now if I just plug this in, it’s going to start.
David Bourgeois (00:54:49):
If the version doesn’t match the one of the controller, it’s going to be bootloaded automatically. So you don’t really have to handle any updates of these boxes; they are going to be kept up to date all the time. And now it’s powering and controlling the camera, but I need to configure it. So to configure it, I go to the configuration interface here, and I click on the plus sign to add a camera. So, that was my camera number four. I put the name; it’s an ATOM camera. I will select…
David Bourgeois (00:55:22):
These are all brands that we support today. And there are some specialty ones, like ERC in the UK [inaudible 00:55:30], AIDA in the US, AJA, BirdDog, Blackmagic, CIS, Camera Corps in the UK, Canon. So there are some very well-known ones. And we don’t make cameras ourselves, but we have a kind of generic protocol, so if you want to use a generic camera and develop your own interface, you can do that. Dreamchip [inaudible 00:20:48], Marshall, Panasonic, Ross for the PIVOTCam, Sony, Toshiba, which is now owned by Canon, Vision Research, Yoshida. But here I have a Dreamchip, so I select Dreamchip. And in that case, I have the list of models that we support.
Jim Jachetta (00:56:06):
[crosstalk 00:56:06] Are you frozen, Dave? Well, maybe… we couldn’t see the dropdown menu; for some reason it’s not visible.
David Bourgeois (00:56:13):
It is actually possible that the dropdown doesn’t show up because…
Jim Jachetta (00:56:20):
So what David did there, you see on the right, kind of middle, where it says brand: he clicked there, and I’ve seen it myself, there’s a long list of about 30. You pick the camera manufacturer you want, and then it automatically sets the protocol, pretty much, right?
David Bourgeois (00:56:39):
Yeah. I will just share the complete screen. So let me get that out of the way. Do you see the screen right now?
Jim Jachetta (00:56:49):
Yeah, we see the control screen.
David Bourgeois (00:56:52):
You should now see the drop down.
Jim Jachetta (00:56:53):
Yes, now we see it. That’s very impressive. So you see, folks, all the camera choices are there. And if something is missing, you can create your own protocol. If you know how to configure it, you can do it manually.
David Bourgeois (00:57:09):
Now, if you take the Cyan one, that is a generic protocol. I would say if you’re a developer or manufacturer of some specialty cameras, and you’re looking for an RCP that can be used to control your own cameras or to do anything you want, then you can use that model. And there is an API that comes with it. So you can see the values changing on the RCP, and you can use that to interface your own equipment. But otherwise, if you are using a camera in broadcast that we don’t support: the way we added so many is that our customers just say, okay, we’re going to use this one, do you support it? We say no. Is it possible to add the camera? And then we do the development and we add that protocol. So the list is [crosstalk 00:57:54]
Jim Jachetta (00:57:54):
You don’t sleep, all you do is write the camera protocols.
David Bourgeois (00:58:01):
Exactly. But we’re a team, so there are a couple of us; I mean, we share that. I’m part of this, and then there is mainly one person behind all these protocols and handling all that. And it’s a challenge somehow, because all the protocols are different; they don’t work the same at all. So it’s quite an adventure. But the list has been growing, in a way that at the beginning we just had the models of cameras we support, but that was like three pages long. So we had to split that by brand and manufacturer. So now you select Dreamchip, and then you have the models that we support from Dreamchip.
Jim Jachetta (00:58:40):
David Bourgeois (00:58:41):
They have more cameras, actually, but most of them use the same protocol as this one. So here I select the ATOM one mini. So that’s the way you add any camera: you take the brand, you select it there, you select the model. And then you need to decide on which interface that is. So now, if I go to an interface port: I said on the CIO that I have here, it was 12110, so this is the one I select there. And then I have two ports, port one and two. I connected it on port one, so I’m going to select port one.
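The add-camera steps just demonstrated (name, brand, model, then the interface serial number and port) could be modeled as a small configuration record. The brand/model names and the CIO serial `12110` come from the demo; everything else, including the function itself and the supported-model table, is invented for illustration.

```python
# Partial, illustrative brand/model table (not the real supported list).
SUPPORTED = {"Dreamchip": ["ATOM one mini", "ATOM one SSM500"]}

def add_camera(config: dict, number: int, name: str, brand: str,
               model: str, interface_serial: str, port: int) -> dict:
    """Register a camera the way the web UI walks through it."""
    if model not in SUPPORTED.get(brand, []):
        raise ValueError(f"unsupported {brand} model: {model}")
    config[number] = {
        "name": name, "brand": brand, "model": model,
        "interface": interface_serial, "port": port,
    }
    return config

config = {}
add_camera(config, 4, "atom", "Dreamchip", "ATOM one mini",
           interface_serial="12110", port=1)
```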
Jim Jachetta (00:59:17):
Now let me interject. So that 12110 is on a sticker on the bottom of the device, correct? That’s how you know which… because you mentioned having 50 specialty cameras in a production, which CIO do I need to talk to, right? The number is part of the serial number, correct?
David Bourgeois (00:59:37):
Yeah, indeed. And on this camera, let me go back to the… I did share my full screen. I need to come back to the..
Jim Jachetta (00:59:57):
David Bourgeois (01:00:00):
Jim Jachetta (01:00:00):
Oh, you took the OBS output.
David Bourgeois (01:00:09):
Yeah, indeed. So I can also show the output of the camera there. So we have this… there’s just one thing I forgot. Let me…
David Bourgeois (01:00:24):
So I will just rewind this. Back to the configuration interface that you see here. Okay, you see it. So let’s say I will select the other port. I’m on port one; I will just select port two right now to show you what happens. So it’s the wrong port. What happens on port two is: the camera is right here. If you look at the RCP, I will go to camera four that I added. So now I have camera four, the ATOM, and you see a white icon there as well. And we are going to see that… I will show you after; we have a stream back here that also shows the number four on the ATOM. And on the CIO, I can show you that right away also. If you look here on the CIO, you see on the right I have a small number four, and it doesn’t work because I didn’t connect anything on that port. So now I’m going to go to my configuration again and change that to the right port. So as soon as I add the right port, so now I selected port one, you’re going to see that…
Jim Jachetta (01:02:01):
It’ll go green.
David Bourgeois (01:02:03):
Exactly. It should come green. And this is actually where…
Jim Jachetta (01:02:12):
Give it a sec.
David Bourgeois (01:02:14):
It will, it's just that this camera is at 9,600 baud, while the default is a different rate, so I need to change that; I should have selected another one. As soon as you connect the camera, it turns green here. And if I come back to the CIO, you will see that the number is now on the same side as the connector, and we have a big four here, meaning that the camera has been detected. And if you look at the RCP it's green, and if we look at the configuration interface that we had there, it's green as well.
Jim Jachetta (01:02:48):
Another cool thing I noticed: obviously a studio camera will have a tally light, and David has tally lights on the edge of the little box. A little specialty camera doesn't have a tally, right? So you can light the tally on the box?
David Bourgeois (01:03:03):
If you go to the tally, I can force it on or off...
Jim Jachetta (01:03:15):
It’s kind of hard to see.
David Bourgeois (01:03:17):
Yeah. It’s hard to see, but actually it’s there just on my finger there.
Jim Jachetta (01:03:22):
I see it.
David Bourgeois (01:03:25):
I put it off and on.
Jim Jachetta (01:03:26):
David Bourgeois (01:03:27):
You can also add an LED on the side. We usually use the second port: you put an LED or a lamp on it, anything that you can control that way.
Jim Jachetta (01:03:38):
So you have, what, like a GPO output to control a bigger lamp, if you need a bigger tally light?
David Bourgeois (01:03:46):
No, you can also use the 12-volt output that we have right there to power the LED directly. So that's kind of easy. I actually have one here, I will show it after, but it's a small connector. I put an LED on it, I connect it there, and then you can use it. So now it's on, but I can use it for a [crosstalk 01:04:04]
Jim Jachetta (01:04:04):
Much more visible.
David Bourgeois (01:04:07):
I can actually show that right now, since we're talking about this. If you go to the GPIO page here, that's my camera four, the Atom. I can also turn the tally on and off here, but what I want is this: my CIO is here, and port two power is here. I will just click there, which means that now I have a red tally signal that is going to drive the LED. So if I switch back to this view, you see the LED is off right now, and if I turn on the tally again, you see the LED is on. That's all you need to do to decide that a given GPIO signal should be linked to the tally of this camera.
Jim Jachetta (01:04:59):
Well, and then obviously you have the configuration ability to tie this into your production switcher. You're just doing it manually right now because you don't have a production switcher programmed, but you certainly can do that, correct?
David Bourgeois (01:05:17):
Yeah, we actually integrate with some protocols already. Tally can come from a router, but mainly from a switcher, like an ATEM. TSL will allow you to come from anything. We also have GPIO: there is a GPIO device that plugs into the RCP, and we have a new device called the NIO, a Network IO, which will allow you to have up to 16 GPIO on an IP network. We have a picture of that in the slides a bit later on.
David Bourgeois (01:05:53):
So this interface I showed here is to configure a GPIO to act like a red tally, a green tally, a call signal, or any other kind of GPIO you could use to do something else. Lots of customers actually use that to trigger wipers on some outdoor cameras, these kinds of things. And if you plug GPIO inputs into the interface, I will have a list of inputs here, and I can use this grid to say which inputs should go to which outputs to control tally. That configuration is done in the same place here.
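The input-to-output grid David describes can be modeled as a small routing table. The sketch below is a hypothetical illustration (the names `GPIORouter`, `link`, `set_input` and the signal names are invented for the example, not CyanView's API): each physical input, such as a red tally line from the switcher, drives one or more outputs, such as the camera tally or an external LED.

```python
# Hypothetical sketch of the GPI-to-GPO grid: each input drives a set of
# outputs. Ticking a cell in the grid corresponds to calling link().

class GPIORouter:
    def __init__(self):
        self.grid = {}      # input_id -> set of output ids driven by it
        self.outputs = {}   # output_id -> current state (True = on)

    def link(self, gpi, gpo):
        """Tick one cell in the grid: input `gpi` drives output `gpo`."""
        self.grid.setdefault(gpi, set()).add(gpo)

    def set_input(self, gpi, state):
        """A GPI edge arrives (e.g. red tally for camera 4 goes high)."""
        for gpo in self.grid.get(gpi, ()):
            self.outputs[gpo] = state

# Example: the red tally for camera 4 drives both the camera's own tally
# LED and an external LED on the CIO's second port, as in the demo.
router = GPIORouter()
router.link("tally_red_cam4", "cam4_tally")
router.link("tally_red_cam4", "cio_port2_led")
router.set_input("tally_red_cam4", True)
```

The same mechanism would cover the wiper example: the input is a GPI contact and the output is whatever device hangs off the port.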
David Bourgeois (01:06:35):
So back to this example. Now you see the camera is green; I have control on it. I see the values, I can change the gain on the RCP itself. I'm not sure which signal we see on the router, just because I didn't configure the router yet. So if we go to the router configuration, I have a router which has been added here. If you want to add a router, you go to the plus, and we support the Videohub, the ATEM, Evertz, Quartz, Pro-Bel, which is probably going to cover a lot of routers, plus Ross, VSM, Cerebrum, which are management systems. I guess TSL should be there as well. So you just add a block there, and in its properties you put its IP address.
David Bourgeois (01:07:34):
Then you select the number of inputs you want and the number of outputs, and you will now see the drop-down. Input nine was unconnected, and this is where I had the Atom connected, so I'll just select Atom in the list. Now I've selected the Atom there, which means that if I press the preview button, you see that the signal on the top changes. So if I go to another camera, like the previous one, and come back to the Atom, it actually knows which input signal it is connected to. And there we are. So on the picture, I can put a lot of gain there, I can go to the camera, do a white balance, and trigger another white balance on that one with a little bit of this kind of signal.
Jim Jachetta (01:08:31):
Okay, so you went into the router settings, you did it kind of quickly. There was a list of inputs on the router, and number nine said unconnected. You went in there, pulled down a little drop-down, and connected the camera you wanted to that input.
David Bourgeois (01:08:51):
Jim Jachetta (01:08:53):
It’s that simple?
David Bourgeois (01:08:55):
Yeah. So here I will just do it. I think with the strong lighting that I have in France the auto white balance is not doing great, but I can do it manually. So I have my red gain, blue gain, the black levels, black balance. And then on the main page here we can change the gamma, the saturation, the detail and the filters when this is supported. We have scene files you can load and save, and zero is the default. So when I press apply, I'm just loading back the default settings from the beginning. Another thing which is quite different about our system is that you can load or save all cameras at the same time. So if I click on all, I have multiple slots, and when I press apply it's going to save or recall the settings of all the cameras at the same time. So if you have 20 cameras you want to save, you just save all and it's instant.
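The "save all" idea above amounts to snapshotting every camera's settings into one scene-file slot and restoring them in one operation. This is a hypothetical sketch (the function names and settings dictionaries are invented for illustration, not CyanView's actual scene-file format):

```python
# Hedged sketch of one-shot scene save/recall across all cameras.

def save_scene(cameras, slot, store):
    """Snapshot every camera's current settings into one scene slot."""
    store[slot] = {cam_id: dict(settings) for cam_id, settings in cameras.items()}

def recall_scene(cameras, slot, store):
    """Apply the stored snapshot back to every camera at once."""
    for cam_id, settings in store[slot].items():
        cameras[cam_id].update(settings)

cameras = {
    1: {"gain": 6, "red_gain": 0.1},
    2: {"gain": 0, "red_gain": -0.2},
}
store = {}
save_scene(cameras, slot=1, store=store)
cameras[1]["gain"] = 12             # operator tweaks during the show
recall_scene(cameras, 1, store)     # one action restores every camera
```

With 20 or 50 cameras, the cost of a recall stays one operation from the operator's point of view, which is the point David is making.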
Jim Jachetta (01:10:02):
You don't have to go into each one individually, where you have 50 of them. One save, two save, three save. It just saves everything. That's very powerful.
David Bourgeois (01:10:14):
I mean, I can trigger the auto exposure, I can go back to manual. In the paint menu we have black gamma, saturation, detail. Not all cameras actually support all of that. You see the detail frequency and coring are not available on the Dream Chip camera, so we just put dashes there, so you know that these settings are not supported by this camera model. For most of the functions we try to add an on and off, so you can see the difference of what it does, even if the camera doesn't have an on and off. And we have a paint two page for some of the things like the matrix that you can change, the multi matrix; I will come to that after. Knee, white clip, skin tone, those are available on the Panasonic PTZ.
David Bourgeois (01:11:04):
Under the camera there's OSD; I will show you that with maybe the Marshall. Exposure index, white balance, tally, I showed that. And then some custom buttons depending on the functions that we need there. And lens, there is nothing on that one because we don't control the lens, but with the PTZ you will be able to do a couple of things. So, that's it. Let me show one more thing: we have two ports on this CIO. I will just plug an IO Industries on port 2. [crosstalk 00:01:34]
Jim Jachetta (01:11:33):
So you don't need another CIO if you have another camera nearby? You just plug it right in?
David Bourgeois (01:11:39):
Exactly. And you can put two cameras on one unit, and depending on the protocol, like Marshall, VISCA, Dream Chip, and we'd probably support IO Industries as well, we support buses of cameras, which means that if you already have a bus, you should be able to go from one camera to the second, to the third. You see the features there at the bottom of the configuration; this is a new functionality that we developed. So we have a couple of buses there: we support S.Bus for gimbals, VISCA, Blackmagic also. We can support multiple cameras on one interface, and Dream Chip is going to expand. I had an IO Industries here. We'll just configure it again: I select the interface, I connected it on port two, so I select port 2... and hopefully, yeah, the camera turns green. And now if we go back to the...
Jim Jachetta (01:12:35):
It just went green.
David Bourgeois (01:12:37):
Exactly. Oh, so that's the delay on the video, but... Yeah, I'm not sure you will see it, but now we have two numbers: a four on the left saying that camera four has been connected there, and a number two on the right saying that the other camera is there. So if I go to my camera two here on the RCP, you see that it's an IO Industries and the icon is now green on the RCP. What I want to show with these icons is-
Jim Jachetta (01:13:08):
You’ve got to open the Iris on that baby. It’s very dark.
David Bourgeois (01:13:11):
Oh yeah, indeed. I mean, it’s just, yeah, that’s the right camera indeed and I need to put a.
Jim Jachetta (01:13:26):
Quick number two.
David Bourgeois (01:13:31):
Yep, there we go. And what I want to show you is that sometimes you don't know if the camera port is working, if you have control on it. So if you disconnect it, you will see that it actually goes red. You see that right now on the RCP, and you can see the number here has turned small. So whenever you do the setup, you plug the camera into the CIO, and if it's configured already, you will see: when the number is big, you know that it's working. On the RCP side, when it comes green, you know that it's working as well.
David Bourgeois (01:14:15):
Yeah. So that's a way... This is the IO Industries on that one. You also have OSD; you can enable the menus there. I know what happened is that I manually closed the iris there, so it was quite dark. So you can go to the OSD and change a lot of things in there. All the settings that are not available in the protocol, or that we don't necessarily support because some of them might make no sense, you go to the OSD and set them there manually.
Jim Jachetta (01:14:51):
So I'm not aware of any other camera control system that will go into the OSD of the camera, to go deep into the camera settings from the engineer's workstation. You usually have to go to the camera to do those types of settings. If you want to change the gain or the shutter speed or something like that, you can go in. Can you do that with every camera, or just some cameras?
David Bourgeois (01:15:17):
All the cameras that actually support an OSD. Like I said, the Dream Chip doesn't have one, and probably doesn't need it; they have software on the side. But if I take the AIDA, for example, I go to camera menu, and then I can navigate the menu. We even assigned that to the encoders so that we can more easily jump into a menu and turn something off. If I go to the Marshall, it's basically the same: I have a menu structure where I can change things. I'm just going to wait for the video.
Jim Jachetta (01:15:56):
That's extremely powerful, particularly if you're going over the public internet or cellular. Maybe if you had a computer on the network you might be able to get to some of those functions, but every camera would have a different configuration, a different setup. So you can bring that up remotely, right from the video engineer's workstation. That's amazing.
David Bourgeois (01:16:23):
Yeah, exactly. And again, if you're taking a camcorder or digital cinema cameras, the Sony FS5, FS7, the Panasonic VariCam, the Canon C200, they have a lot of menus and a lot of options. They are not all available in the protocol, so you can basically go and change everything from there.
Jim Jachetta (01:16:44):
David Bourgeois (01:16:45):
Again, I will show that on the PTZ; it's maybe the right time to show you this. So this is a bunch of cameras we have here that we control as mini cams, but if I go to PTZ, it's basically the same. We support a lot of them already. So, how does it work when you configure an IP camera? Is it as simple as that? It's actually even simpler. If I come back to the configuration that we have here, I was selecting the Panasonic; it's already configured here. But what you do is you select the brand, Panasonic, and you select the model. We have a long list of models; there are probably 20 entries there, all the PTZ models, the VariCams. You don't see the drop-down, but basically all the P2 camcorders and the EVA1 are supported with the CIO as well. Then on the IP, we have a lot of discovery with Panasonic, so some of the IPs are going to be filled in automatically. Otherwise you just put the IP of the camera here and that's it. You don't need more than that to get control of the camera itself. And-
Jim Jachetta (01:18:06):
The Panasonic gets auto-discovered, or does that work with any PTZ camera that's on the network? You punch in the IP and it will automatically figure out the brand, or do you have to program the brand and then put the IP in?
David Bourgeois (01:18:22):
Yeah, no. So you select the brand and model, and after that, if the IP is discovered, it will be selected automatically. This is something that will probably evolve over the years, where we're going to add maybe more discovery, listing all the devices that are discovered on the network so you can just select the ones that you want to take into control. It makes sense for PTZ because some productions have many, many of them, so it would save time. But I would say today this process is quite straightforward. When you have multiple cameras of the same type, if you just select that camera and click on the plus, it will automatically create a new camera of the same type. [Crosstalk 01:19:05] These are all things that we put in place to save a bit of time.
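The add-camera flow just described, pick brand and model, then either a CIO serial port or an IP address, with a "+" that duplicates the selected camera's type, can be sketched like this. All names here (`add_camera`, `duplicate_camera`, the model string and IPs) are illustrative assumptions, not the actual configuration API:

```python
# Hedged sketch of the brand/model/interface add-camera flow.

def add_camera(config, brand, model, *, cio=None, port=None, ip=None):
    """Create a new camera entry on either a CIO serial port or an IP."""
    cam_id = max(config, default=0) + 1
    config[cam_id] = {"brand": brand, "model": model,
                      "cio": cio, "port": port, "ip": ip}
    return cam_id

def duplicate_camera(config, cam_id):
    """The '+' button: a new camera of the same type, address left blank."""
    src = config[cam_id]
    return add_camera(config, src["brand"], src["model"])

config = {}
c1 = add_camera(config, "Panasonic", "AW-UE150", ip="192.168.1.50")
c2 = duplicate_camera(config, c1)   # same type; just fill in the new IP
config[c2]["ip"] = "192.168.1.51"
```

Discovery would then only have to fill in the `ip` field of entries whose brand/model is already chosen, which matches how David describes the Panasonic case.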
Jim Jachetta (01:19:11):
Many customers are using NDI. Do you not need this capability if you have NDI, or do you work with NDI? How does NDI fit into the equation?
David Bourgeois (01:19:24):
Yeah, actually, the controls in NDI are very limited at this stage. Controls for pan and tilt and these kinds of things are included in NDI, and we're going to support that as well at some point. But even with BirdDog and all the cameras, if you take the Panasonic and Sony models that are NDI today, the status is that the control protocols for the painting, I mean the shading and everything else, are still their own protocols. That's not part of NDI. It's all IP, so you don't actually need to put everything in NDI: it's the same cable, the same everything. You just use the protocol of Sony or Panasonic, VISCA, or anything else to control the camera, while the video, instead of going through SDI, goes over NDI on the same-
Jim Jachetta (01:20:19):
I see. So since the NDI camera control capability is limited, you would just use NDI to carry the video, and then you would do the control of the same camera with CyanView?
David Bourgeois (01:20:34):
Jim Jachetta (01:20:34):
David Bourgeois (01:20:36):
And so it's completely independent, I would say, and this is fine. There is actually no problem using NDI instead of SDI. The same way we don't deal with the video on SDI ourselves, we don't deal with the NDI. There is going to be a transmitter and receiver, and we are completely on the side of that, doing the paint and the camera [inaudible 01:20:55]. So yeah, there's absolutely no problem using that with NDI. There's nothing special to do. It's actually easier in a way, because you already have the IP network with everything.
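The point above, that NDI carries the video while control rides the vendor's own protocol on the same network, can be made concrete with a control-side sketch. As one example of such a vendor protocol, here is a Sony-style VISCA-over-IP command packet sent over UDP port 52381. The packet framing and the "gain up" byte sequence follow the commonly documented VISCA conventions, but treat both as assumptions to verify against your camera's manual before use:

```python
import socket
import struct

VISCA_PORT = 52381  # UDP port commonly used for Sony VISCA over IP

def visca_ip_packet(visca_cmd: bytes, seq: int) -> bytes:
    """Wrap raw VISCA bytes in the VISCA-over-IP header:
    payload type (0x0100 = command), payload length, sequence number."""
    return struct.pack(">HHI", 0x0100, len(visca_cmd), seq) + visca_cmd

def gain_up(camera_ip: str, seq: int = 1) -> bytes:
    """Send a 'CAM_Gain Up' style VISCA command to one camera."""
    cmd = bytes([0x81, 0x01, 0x04, 0x0C, 0x02, 0xFF])  # VISCA: gain up one step
    pkt = visca_ip_packet(cmd, seq)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(pkt, (camera_ip, VISCA_PORT))
    return pkt
```

The NDI stream never sees any of this: video and control simply share the same Ethernet cable, which is exactly why "it's the same cable, the same everything."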
Jim Jachetta (01:21:08):
Right. If you have an NDI setup, you're already in the IP domain, so adding CyanView is a no-brainer at that point.
David Bourgeois (01:21:18):
Jim Jachetta (01:21:19):
Okay. Do you have some slides to show us about the RIO, what the differences are between the RIO and the CIO?
David Bourgeois (01:21:30):
Jim Jachetta (01:21:30):
I'm looking forward to that.
David Bourgeois (01:21:32):
Maybe I will just show you this as well. I have a Stream Deck here; it's one of the many things that we support as integrations. And you see, I have the Blackmagic router over there, the Blackmagic panel. So this is a Blackmagic URSA studio and broadcast camera that I'm using as the main angle. You see, I did nothing else than select that camera. If I select camera one, then I can change the gain of that camera; that's the Blackmagic Micro Studio. If I select camera three, I'm on the Marshall, and if I just paint that on the RCP, now I'm controlling this one. Sorry, that's the Panasonic PTZ. Now if I do exactly the same on the Stream Deck, you'll see that it's actually switching the Videohub.
David Bourgeois (01:22:23):
So the RCP will switch that. I can switch between multiple cameras on the RCP, and I can do exactly the same on the Stream Deck. I switch to camera three, so now I can paint that camera. I can switch to camera four with this button and paint this one. I switch to the Marshall and I can paint it. So I have nothing to do on the RCP; I keep my hands on the knobs to paint. And then I use the panel of the router, or a Stream Deck on the side, or we also have a switcher interface in the software that you could put on a phone or tablet. So there are many ways to switch cameras, and even, as I said, on the RCP itself: I can paint one camera, switch, and paint another. It's very quick to handle all that.
Jim Jachetta (01:23:10):
So, I'm thinking of Saddleback Church, where I volunteer. They have a few specialty cameras on stage. I think they need a CyanView RCP and a couple of CIOs to be able to paint those cameras, to get them to match. The director won't take camera feeds if the video doesn't match the other feeds; they just won't use them in the production, or it's too obvious. If the pylon camera doesn't match the main stationary cameras, it looks awkward, out of place.
David Bourgeois (01:23:51):
And that was the starting point. You're talking about the pylons, but it's the same for all the big productions that use many mini cameras; that's where they were struggling to get control, and this is where we said we need to do something good about that. That was the starting point, and we kept adding from there. If I take, for example, the Panasonic PTZ here, as I said, the next point for us is that now you can move the camera. You see I'm moving the camera here with the touch screen; I can also use the encoders. I can put it in a certain position and save that. So I will save this position, and now I click on another button and it's actually going. I made a position earlier on the color chart [crosstalk 01:24:40] and I can record different positions from the RCP.
Jim Jachetta (01:24:44):
So you don’t need a joystick. You can control the PTZ cameras, right from the touchscreen?
David Bourgeois (01:24:50):
Yeah, indeed, you don't need one. In some situations you really don't need it, in the sense that if you're using the camera for framing, you can do the framing from here, or move between two positions. That's very quick.
Jim Jachetta (01:25:03):
[crosstalk 01:25:06]. So you’re not panning it while it’s live. You’re just jumping from preset to preset and you frame it before the production.
David Bourgeois (01:25:15):
Yeah. If what you want is to do tracking, that's a different case, but there we do support a couple of joystick options. For the VISCA ones, we have a USB joystick that is inexpensive and pretty decent, which you plug into the back of the RCP. It's USB, and then you have a joystick that you can use to move the camera you've selected. On bigger productions, if they have a joystick panel on the side, like Sony, Panasonic, BirdDog, they can still use it the way they do today; it's IP. They do the shading from the RCP; they could still move the camera from the RCP, but that would be done from the joystick panel. So [crosstalk 01:25:59]
Jim Jachetta (01:25:59):
Yeah. In a smaller production, the video engineer might be doing the shading and running the PTZ cameras, but in a bigger production, it would be a camera operator moving the PTZ while the video engineer does the shading. Different functions.
David Bourgeois (01:26:16):
Exactly right. So that's the way it works. When you have many PTZs, you're going to have multiple camera operators, and on the RCP side you might have just one or two video engineers; it depends on whether the conditions change a lot or not. And again, this is what you can easily do with our system. As I said, on the Panasonic you can also go to the menu there and access everything; again, that's available for all cameras. So that's it for IP cameras; I would say it's as easy as that. We'll get to ENG right after with the RIO, but all those kinds of ENG-style cameras or cinema cameras are also available. You [crosstalk 01:27:12]
Jim Jachetta (01:27:11):
You said Ikegami and Grass Valley are coming?
David Bourgeois (01:27:16):
Exactly, for the ENG. Those are the ones where the RIO makes sense. If you're on a local network and you have the RCPs from Ikegami, from Grass Valley and Sony, we have no added value; people are not waiting for our system to help them. But if you're talking about remote production, where you actually just want the camera operator with the camera head onsite and nothing else, this is where it makes sense to integrate these cameras as well with our technology. And that brings us to RIO. This slide here, we went through that; I explained a bit of everything. What I didn't show is the color corrector that we have here on the slide, the VP4, which is...
Jim Jachetta (01:28:06):
So this is the CyanView color corrector? It's your own color corrector.
David Bourgeois (01:28:14):
Exactly. We control the ones that are available, as I said, from AJA, sorry, I forgot to mention that earlier. But the reason we developed our own is to be able to match cameras together. As I said earlier, you can still paint the blacks, the whites, the gamma, the saturation in a standard video processor using the proc amp. But as soon as you want to match cameras that don't look alike, because they don't have the same sensor or the same brand and model, that's where it doesn't work anymore.
David Bourgeois (01:28:54):
And I can take this Panasonic camera here, if I can point it at the color chart. You can see it in the configuration; there's camera seven and eight. Camera seven is the GoPro I have on the RCP, just to take the view of the RCP, and camera eight is post-processing: I don't have camera control there, but I did add a video channel on the VP4 for that camera eight. You can actually combine camera control and video processing. The way it works is that if, for example, gain is available in the camera head, the gain will control the head, but if the camera doesn't have black control, then when you change the black, it's going to be done by the video processor. So it's really the point of having-
Jim Jachetta (01:29:50):
[Crosstalk 01:29:50] And you can configure that. So if a control isn't possible in the camera, you can still do it. It gangs the functions together, whether it's in the camera or in the CyanView video corrector?
David Bourgeois (01:30:06):
Exactly, and that works as well with AJA and all those other color correctors. The reason we did that is that when a camera lacks black control, and some of the older Marshalls had that problem, people would add a color corrector, saying: I need this control; if I don't have black control, it doesn't match. So they'd add the color corrector just to apply that. But we said it doesn't make sense to have two different devices on the RCP. We can have one camera and merge these controls: what's available in the head will be applied in the head, and what's not available in the head but available in the color corrector will be done there.
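The control-merge rule David describes, head if supported, otherwise the corrector, is simple to express. This is a hedged sketch of the idea, not CyanView's implementation; the dictionaries and the `apply_paint` function are invented for illustration:

```python
# Hedged sketch of merging camera-head control with a color-corrector
# channel: each paint parameter is routed to the head when the head
# supports it, and falls back to the video processor otherwise.

def apply_paint(param, value, head, corrector):
    """Route one paint control to the head if supported, else the corrector."""
    if param in head["supported"]:
        head["settings"][param] = value
        return "head"
    corrector["settings"][param] = value
    return "corrector"

# e.g. an older Marshall with gain control but no black (pedestal) control
head = {"supported": {"gain", "white_balance"}, "settings": {}}
vp4_channel = {"settings": {}}

apply_paint("gain", 6, head, vp4_channel)    # applied in the camera head
apply_paint("black", -2, head, vp4_channel)  # applied in the VP4 instead
```

To the video engineer the camera still looks like one channel on the RCP; the routing decision per parameter is invisible, which is the whole point of the merge.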
Jim Jachetta (01:30:44):
That’s very cool.
David Bourgeois (01:30:45):
And here I actually just made a camera where I said there is no camera head to control, so I have all of the settings available from the color corrector. You see that on the main monitor? So, the same thing: I have scene files, and I can save and load as many files as I want. I'll just start by setting this to default, so it's transparent. I can change the gain, I can change the blacks, the black balance. I will just make it completely warm, and I have access to the gamma. Then I can go to my scene files and load default again, and that is going to reset everything. There is a little bit of delay, but I guess you managed to follow. So those are the corrections that you have in a standard color corrector.
David Bourgeois (01:31:43):
Nothing different here. Now, if you want to match different cameras, this is where we had to add a couple of things. You see that in paint one, for example, we have gamma just like the other ones, but we also have black gamma. You can turn the black gamma on, put it to the maximum, and if you toggle it on and off, I'm not sure with the delay you will follow, but you see the difference there. Saturation, detail; detail is definitely not normally available in color correctors, so we can add more detail to the picture. But let me show the main feature. We have a multi matrix there, and I have a gate effect. So now you see the picture is black and white except one color, and I can change it and pick another color.
David Bourgeois (01:32:29):
So let's say I want to change the green. The green of the grass is not matching the main cameras; you have a mini cam in the goal and the green is not matching. So what do you do? I have the gate effect, I select the green, I can add more saturation, I can shift it; let's say this is going to be a different green, just to show the difference. And if I remove the gate effect here, you see that the green has been changed. If I toggle it on and off, you see that only the greens are affected. For the next demo I should basically add a vector scope, because then you would see exactly the colors that we pick and move.
Jim Jachetta (01:33:12):
This is a homemade way of doing a vector scope, just isolating the green.
David Bourgeois (01:33:20):
Yeah, exactly. This is what we did there. I'm not sure if you will be able to see it there on the... Well, it's not the best example, but if I toggle it on and off, you will see that only some parts move and the other parts are not affected. And so-
Jim Jachetta (01:33:42):
At first, I was trained as a camera operator at Saddleback Church. And I go into the control room and I'm like, "Hey, you have a Tektronix WFM waveform and vector scope monitor." And they're like, "You know what a vector scope is? You know what a waveform monitor is? You're a video engineer now." So I got promoted from camera operator to video engineer. Well, not that camera operators don't have to frame the shot properly, but maybe my video engineering skills were better than my camera skills. Maybe I didn't know how to frame the shots properly, so they quickly switched me to video engineer.
David Bourgeois (01:34:20):
Yeah. So you get the idea. I can take the GoPro that is shooting the RCP right now. I'm not sure if we're going to see the chart there. And so, that's-
Jim Jachetta (01:34:27):
Yeah, there's a little piece of it. Yeah. So you could shade it right from there.
David Bourgeois (01:34:32):
And again, for the GoPro, I have no control at all on the camera. That might come in the future, but right now we don't; we did a proof of concept of that on Ninja Warrior. So if the picture of the GoPro doesn't match, but I still want to use it, or an iPhone with NDI, say you have an iPhone and you get the signal at your facility remotely, now you can pick that red and add more saturation to it, change it to the one you like. So let's say I want a bit of that, and you see now the red completely changed, and I can match the other cameras.
David Bourgeois (01:35:12):
And that is the main reason we developed our own video corrector. We're not competing with the other ones. If you're happy with an AJA FS4 at that price point, there's nothing more to it; you don't need ours. If all your cameras are the same, they're all going to match somehow. But whenever you're going to mix different brands and models, they are not going to match, and that's the nightmare of a video engineer.
Jim Jachetta (01:35:40):
Even if you're using a bunch of GoPros and they match, you may not be happy. The grass might be yellow. You still may need to correct it. [crosstalk 01:35:51]
David Bourgeois (01:35:50):
Or a drone. Today you have a drone camera, you want to match it with the other ones, and it's easy with the multi matrix: you pick the color that doesn't match. As I said, the green of the grass, the white of the shirts. Concerts: if you put a GoPro on a piano, the pianist might have a red dress and it doesn't match the other cameras. That's definitely where it helps to quickly add support for that. And now let's move to RIO, which is the remote production part. There's a nice picture of the VP4, and I can tell you more, or Jim can, if you need more information, but it's four channels in and out, and in the end the goal of it is matching.
David Bourgeois (01:36:40):
Now, I want to bridge that with the RIO, which is for remote production. So everything that I explained here on the local setup, we want to be able to do over the internet. And internet means, basically, latency on the video, because you need big video buffers to make sure that the video is not going to break up. So you might end up with 500 milliseconds to 10 seconds of latency. Quickly, the bridge I make there with the video processor, you might see that in the slides after, is the fact that the cameras we are going to use remotely, if I'm talking about an ENG camera, might not have all the processing built in to be able to match another type of camera.
David Bourgeois (01:37:29):
So you might still be happy to have the VP4. If you put the VP4 not on the camera side but in your studio, your facility, the master control room, what is different is that you won't have latency. Let's say you want to adjust the black or the white balance: because that's applied in the studio, that's after the latency on the video. If you want to change the iris, there is no way around it, you have to do that on the camera side, and you will have the latency of the video. But if you want to do some fine-tuning, some small adjustments afterwards, using a VP4 at the studio side is actually a very good solution. The people that tried it definitely saw the advantage: you have the feeling of having a base station, a CCU. You fine-tune the white balance and you have immediate feedback. You don't have to wait those one or two seconds. So it's still good to have camera control, but this is like another layer of adding value to this remote production thing.
David Bourgeois (01:38:37):
So remote production, what is it? This is controlling the cameras over the internet, over a latency network. So we still have the RCP, we have a network in between, and on the left side we have the RIO unit. The RIO is like the CI0, but extended to have its own software; all the software runs on the RIO as well. And the reason for that is that if we have latency between the RIO and the RCP, then it doesn't matter. We deal with the latency. The camera is only talking to the RIO. It's not talking to the RCP. So if we want to add gain to the camera, the RCP is going to send a single command, say, increase the gain by 3 dB. And then the RIO will translate that to whatever protocol the camera speaks.
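The relay idea David describes here, one abstract command crossing the slow link while the RIO handles the camera's native protocol locally, can be sketched as a toy model. Everything below is invented for illustration (the message format, the class names, the VISCA-style frame); it is not CyanView's actual API:

```python
def rcp_command(param, delta):
    """One small abstract message crossing the high-latency internet link."""
    return {"param": param, "delta": delta}

class Rio:
    """Runs next to the camera and owns the protocol-specific exchange."""

    def __init__(self, protocol):
        self.protocol = protocol
        self.state = {"gain_db": 0}

    def apply(self, cmd):
        # Expand one abstract command into the low-level message(s) the
        # camera actually understands. This exchange stays local, so the
        # camera never sees the internet latency.
        if cmd["param"] == "gain":
            self.state["gain_db"] += cmd["delta"]
            if self.protocol == "visca":
                # invented VISCA-style hex frame, for illustration only
                return [f"81 04 4C 00 {self.state['gain_db']:02X} FF"]
            return [f"SET GAIN {self.state['gain_db']}"]
        raise ValueError(f"unsupported parameter: {cmd['param']}")

rio = Rio("visca")
msgs = rio.apply(rcp_command("gain", 3))  # "increase the gain by 3 dB"
```

The point of the split is that swapping the camera (and hence the protocol branch inside `apply`) never changes what travels over the internet.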
David Bourgeois (01:39:29):
And so if you need, let's say, to change 10 settings to be able to apply one, you can do that instantly. You don't have the latency that you might have between the RIO and the RCP. Here I'm talking about data latency, the control latency, which is not the 500 milliseconds to 1 or 2 seconds of the video. It's smaller than this. We're talking about 10 milliseconds to 100 milliseconds in the worst case, I would say. But even though that sounds like "I can deal with that, it doesn't matter if I have 10 or 100 milliseconds of delay on my settings," it's not true. Some of the protocols will just give up after a certain amount of latency.
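A toy illustration of the timeout problem David raises: a protocol that expects its reply within a fixed window fails over the internet but succeeds when terminated locally on the RIO. The 50 ms timeout and both round-trip figures are assumed for the example, not taken from any specific camera protocol:

```python
def handshake_ok(rtt_ms, timeout_ms):
    """True if the reply arrives before the protocol gives up."""
    return rtt_ms <= timeout_ms

PROTOCOL_TIMEOUT_MS = 50.0   # assumed timeout; varies per camera protocol
internet_rtt_ms = 100.0      # RCP talking straight to the camera remotely
local_rtt_ms = 2.0           # RIO talking to the camera over a short cable

direct_ok = handshake_ok(internet_rtt_ms, PROTOCOL_TIMEOUT_MS)   # fails
via_rio_ok = handshake_ok(local_rtt_ms, PROTOCOL_TIMEOUT_MS)     # succeeds
```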
Jim Jachetta (01:40:17):
It'll time out. Like you said earlier, if the camera can't talk to the RCP or the CCU, it'll time out, it'll fall apart. So you were telling me yesterday, David, that the RIO actually is its own RCP, that the RIO is kind of emulating or spoofing the RCP. Some of our other vendors do something similar, like our wireless partner ABonAir: they spoof the controller, so the camera thinks the RCP is right there. It thinks the RIO is the RCP, correct?
David Bourgeois (01:40:51):
Exactly. So, in that case-
Jim Jachetta (01:40:53):
Yeah. And that’s part of the magic.
David Bourgeois (01:40:55):
In that case, all the protocols that we support will still work over the RIO, because again, what we send from the RCP to the RIO is just this: apply 3 dB of gain, or move the pan-tilt head of the PTZ camera to the right. And then it doesn't matter which protocol we use on the other end, it will still work. We are basically dealing with the latency of the network on our side, so it's completely independent of what protocol the camera, or other kinds of equipment, is using. We want to switch a router remotely? We can do that. We want to move a gimbal? We… All those kinds of things are basically built into the system, because the main software we developed in the RCP to control everything is the same software running on the RIO.
David Bourgeois (01:41:44):
And so what can we have in between? We can have a direct internet connection. We can have Wi-Fi. We can have the internet. We can have cellular 4G LTE. It doesn't matter. 2G might still work, because the data bandwidth is very small compared to video. It's nothing. Mainly we find that today it's 4G or something like this. In the future we might also have some very, very low-bandwidth solutions to go over RF modems, but that's definitely not a short-term goal for us. The main thing is to make remote productions start working remotely. And when we say control, everything that we control today will be available. And on the left side, I didn't talk much about it, we have no time for that, but we support lenses.
David Bourgeois (01:42:32):
When I say lenses, an ENG camera already does that, but let's say you take a digital cinema camera like a Canon C200 and you put an ENG lens on it because you need that for your shots. The Canon camera is not going to control it, but the RIO has two ports, and we can control the camera with one port and the ENG lens with the second port. Some people put that on a drone or on a robotic system, where they have a mini cam and the ENG lens because they need that big zoom range. In that case, we support all of that. Again, if you have a gimbal, we can control that as well, all over the internet. Typical applications, and that's where you're involved as well with AVIWEST, are doing remote production of sports, for example, or if you have multi-cams, it's definitely better when you can match them together.
Jim Jachetta (01:43:28):
Right. You can see there in David's slide, he's got a little USB 4G modem. The PGA has been using the CyanView RIO and RCP. One of the Achilles' heels of using a single modem: you mentioned JVC, which has the ability to put a cellular modem on some of their cameras for video and control. But the problem with a single modem is if you lose connection… I believe we've been configuring these modems with the PGA on AT&T. You hit an AT&T dead spot, you lose camera control. So what we're going to show you in a little bit is that VidOvation, CyanView, and AVIWEST, our bonded cellular partner, have integrated so that we can do a bonded data connection to connect the RIO to the RCP through a bonded cellular data connection.
David Bourgeois (01:44:38):
Indeed, you're perfectly right there. There are a lot of situations where that actually works fine, but if you're mobile and you move around… I mean, sometimes losing control is not the end of the world. In some productions it actually is; they need it all the time. So it really depends on that, but the need to have bonding for the data, and actually to be able to use the same wire as the video, makes a lot of sense, because if you lose the video anyway, you don't need the control anymore.
Jim Jachetta (01:45:14):
Right, right, right. I mean, there are workflow issues when you're doing bonded cellular. We're doing a lot of at-home production, REMI production, where they just send the camera operators out and everything is produced back at master control. But the video coming in will have anywhere from 800 milliseconds, eight-tenths of a second, up to three seconds of latency if the cellular network is struggling. So the camera operator will see an image on the screen, but it might be a second old. They have to make small adjustments and not overshoot. It's part of the workflow. There are pluses and minuses to doing at-home production over cellular: it might introduce latency in the video, but you just have to be aware of it and adjust your workflow accordingly.
David Bourgeois (01:46:13):
So you see here, we can handle not only the cameras. If you want to put a beauty shot, a mini cam, on [inaudible 01:46:21], this can be done as well. Just to finish that part: this slide actually splits that in two. On the left, it's exactly what we explained. You have a camera operator with this camera and a cellular device to send the video, and our system then basically adds camera control. That's all you need. Now, there are some other kinds of REMI-style productions where it's not only camera operators. They're shipping a minimal amount of equipment, but it's still a lightweight production. It can be a flypack or a minivan with equipment there, but you might not have all your operators on site. In that case, it might still be interesting to have control remotely, because for some of your crew, the shading might be done separately, remotely, or the camera operation, the camera movements on PTZ.
David Bourgeois (01:47:18):
So what we are developing now is the fully distributed solution that I explained earlier: the fact that you can have a small local production with an RCP, and different venues with different small productions, and still be able to control that from a gallery, from a studio, or again, put the same RCP at different places. At home, you might have your RCP at one place, but the video switcher might be at another, where you're going to interface the tally signals and all that. So with that fully distributed solution, I would say we don't know today of any kind of workflow that we will not be able to support.
Jim Jachetta (01:48:01):
You were telling me yesterday, David, that it would be common to need an RCP at the venue. A technician who's setting up the cameras might want to do a rough paint, adjust the blacks and the whites locally, and then hand off to the operator working remotely, at home, or back in the studio.
David Bourgeois (01:48:27):
Yeah. I mean, let's say you have multiple video operators doing shading during the event, but you don't want to send all of them on site. In that case, you're still going to have one RCP so that someone can, during the day, shade all the cameras to put them in the right ballpark and make sure that everything's working. Then, when the event is going to start, you can take control remotely and have multiple operators do the shading of that event. You've seen the slide before, that was the RIO and its couple of ports, but you can connect multiple cameras to a RIO. We do support tally whenever it's built into the camera. Some ENG camcorders support it; some don't, because a camcorder was not necessarily made to be a live camera, it was made to record.
David Bourgeois (01:49:22):
And so you have a red LED, but it's actually a record light and not a tally light. So sometimes you just cannot control it, in which case you add a tally LED on the side, on the second connector, and you put that in the viewfinder and it still works. We have a picture there of the Sony FS5. We did manage to get tally working on that one, so the [inaudible 01:49:46] tally works, but if you need an LED on the side, you can do it. We have tally on that Panasonic PTZ, or on some mini cams. And to ingest tally, that can be done from a dongle on the RCP, or it can be done by the unit you see there, just a small Ethernet device. You plug it into the network, and then you connect your 16 inputs. We are going to support more protocols like TSL and so on.
Jim Jachetta (01:50:13):
So you've got GPIO, TSL, switcher protocols, all those protocols you can use for control and tally, et cetera.
David Bourgeois (01:50:25):
Actually, just like everything, we keep developing the things that are necessary. So whenever we come to a situation where we understand the best way to do it, that's where we do the development. And we did quite a lot. I mean, [inaudible 01:50:41] we're not that well-known in some areas; we have yet to start working a lot on our marketing. We basically worked a lot on the R&D, developing all those functionalities and making sure that people are happy with them. It's becoming mature now, so definitely this is going to spread, and the further we go the better. We understand various workflows on big productions, small productions, remote productions, and we keep adding features. So maybe, just to do a last demo, I will put that aside. I do have a Sony camera here from a customer who loaned it to us to make the integration. It's a Sony camera; I will turn it on.
David Bourgeois (01:51:37):
And if you… I'm going to do the demo here. Normally the network works. Sometimes it drops, but I have that 4G dongle with the SIM card inside. There is nothing else connected to that camera. So you see, I have just one wire here, and it's actually taking power the other way around, taking power from the camera. So the device is powered and it's going to control the camera. So you see, I'm a bit too late, but it boots, and you have that icon in the middle that says it's actually connected to our server on the internet. Okay.
Jim Jachetta (01:52:24):
I see that it’s got the little router with the wifi, the wireless-
David Bourgeois (01:52:28):
Exactly. So if we go back to the slides: we have a relay on the internet. You don't need to use it; you can open ports and make a direct connection, or I can have a tunnel, like we're going to explain with AVIWEST. But if you just have internet access, we have a relay, which means that, and this is actually the case here, your RCP is connected to a switch on the internet, and I did nothing. And then that one gets internet access through the 4G dongle. And-
Jim Jachetta (01:52:55):
It auto connects through the cloud relay.
David Bourgeois (01:52:59):
Exactly, it auto-connects. I mean, that's been configured beforehand, saying this is my RIO, and this RCP and RIO should be paired together. But once this is done, it'll automatically connect. And so you see the big number is there, and the camera is green on both sides because the control is there. If I now disconnect the internet connection, I'm going to lose it. We can do that afterwards: when I end the demo, I will turn off the camera, and you will see the green icon there go away, and so on. So now this is my camera nine, sorry, my Sony camera here. The same applies: I have the green icon. So you see the picture there is blown out; I have iris control on this camera.
David Bourgeois (01:53:50):
And the same way I did earlier with the configuration, I'm going to check… I do believe it's still configured there. I have the tally here configured on the second port of the RIO. So if I put my LED here on the second port, it should light up like the other ones. So I did put the LED here on the second port of the RIO, and I can use the RCP right now to just emulate the tally, turn that on. And you see that now I have the tally on the front of the camera here, I have the tally on this LED, and I have the tally on the LED of the RIO unit here as well. You can't see it, but in the viewfinder it's on as well on the camera.
David Bourgeois (01:54:50):
So tally works, and I have camera control; you see the iris changing. I didn't explain, but whenever we have an iris, the mode button here is exposure, because a lot of those mini cams don't have an iris. So whenever you change the exposure, it's going to change the iris on this one. If I take a [inaudible 01:55:11], it actually has no iris, so it's going to change the gain. But for me, it's still exposure: I can adjust exposure on the Sony, I can adjust exposure on the [inaudible 01:55:19]. I don't have to switch between iris and gain. And also, we have had some requests from people asking for a joystick. An iris joystick makes a lot of sense if you have one RCP per camera, and we're working on something like this, but whenever you have multiple cameras on one RCP, the joystick doesn't work anymore.
David Bourgeois (01:55:40):
Because it's getting dark, you need to open up all the cameras. With a joystick, you open one, then go to the second, and then you're at it a while. Here, it's actually easy to keep your hand on the control and switch cameras: open that one up a bit, add a bit more gain on that one, and so on. It doesn't need an iris; it's very efficient that way. You have your [inaudible 01:55:59] control, your exposure, for these kinds of cameras, and it's definitely, actually faster doing it that way. As I said, a joystick makes sense when you have four RCPs and you put your two hands down and try to shade four cameras simultaneously. But when you're going to do them one by one, this solution is actually working very, very well.
Jim Jachetta (01:56:23):
I am not good enough to shade multiple cameras at the same time. At Saddleback, I have a multi-viewer above me. I cycle through the camera feeds to check them individually as I switch, or I see it on the multi-viewer, or the director will tell me camera four is overexposed. I'm convinced the director always wants a camera that I'm not ready with. The one that's overexposed is the one he always wants.
David Bourgeois (01:56:52):
I know the feeling. We actually thought about a feature that would be a kind of automatic tracking on the RCP of the camera that's on preview, for example. Or you could have a button to turn that on and off, so that if you press preview here, because I don't need my preview button here, it would automatically call the camera which is on preview.
Jim Jachetta (01:57:19):
I would like that. So what I would do, I would manually track what's on preview and check that first, but I'm also listening to the director right next to me, or on the comms. He'll be like, ready two, he's going to go to two, let me check that. Wait, ready three? I thought he was going to do two. No, ready four, and then all of a sudden he goes to two. Wait, you were on two, three, four? So you get used to your director, how quickly they call their shots, but-
David Bourgeois (01:57:53):
No, I mean, definitely. So I think that's it. We have no latency here on the video, well, you have the webinar, but as I said, if you have latency, it's difficult to do very small adjustments; or if you have multiple cameras that don't match because they're not the same, having a VP4 in the same setup might make sense, because in that case, whatever is missing in the camera, you can still have it, and for your final adjustments you could decide to do them at the studio side. Now there's one point, and maybe you're going to expand on that, talking about AVIWEST: we worked with AVIWEST to make sure that our solution would go over their data tunnel. And so the way it works is that you have the RCP on that side.
David Bourgeois (01:58:48):
Instead of having an internet connection, the AVIWEST system is acting like a gateway and will route all packets from one side to the other. In that case, we can reach a camera, or we can reach a RIO unit to control multiple cameras on the other end. More and more of those systems have not only one feed; they might have four feeds. So it's possible that you might have a small wired local production with four cameras, and you just need one RIO on that side to control the four cameras. Usually we're going to use the RIO, which means the RCP talks to the RIO, which converts the commands to the protocol used by a serial camera or an IP camera. This is the way we can make sure it works.
David Bourgeois (01:59:38):
But sometimes, depending on the latency, depending on the camera, if you have an IP camera, it might work directly from the RCP to the camera. And it works at various degrees of quality, I would say. You can still get control, but it will become steppy. Let's say, instead of having a hundred different iris values per second, you will end up with maybe 10 or 2. In that case, you will have big jumps. When the latency increases, some cameras will stop working; some will just drop and come back. A lot of things happen, but some might just work. And in that case, you don't even need the RIO: if you have a data bridge, you can directly talk to a camera. But since you never know how the latency is going to evolve, if you start at 12 milliseconds but it suddenly ends up at 60 and it doesn't work anymore, having a RIO there is a way to make sure that it's not affected by the latency at all. You might have 500 milliseconds of latency and it still works.
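The "steppy" control David describes follows from simple arithmetic: if each new setting value effectively waits one round trip before the next can be sent, the update rate is the inverse of the latency. A minimal sketch with illustrative numbers only:

```python
def updates_per_second(rtt_ms):
    """Effective setting updates per second if each value waits one round trip."""
    return 1000.0 / rtt_ms

smooth = updates_per_second(10)    # ~100 values/s: smooth iris ramps
coarse = updates_per_second(100)   # ~10 values/s: visible steps
jumpy = updates_per_second(500)    # ~2 values/s: big jumps on screen
```

This is why terminating the protocol on a RIO next to the camera keeps the control smooth regardless of what the internet link is doing.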
Jim Jachetta (02:00:40):
So the AVIWEST data bridge, or data tunnel, what it does is establish a connection from the studio side to the field. You can configure it so they're on the same subnet, so that the assets in the field appear on the network in the studio like they're on the same LAN. Or we can do different subnets, and sometimes that makes sense: you might want cameras in different areas of the remote site on different subnets. So let me take control.
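The routing idea behind the data bridge can be sketched as a simple lookup table: destinations in the field subnets are reached via the receiver acting as gateway, while studio-LAN destinations are direct. The subnet numbers and names below are invented for illustration; this is not AVIWEST's configuration format:

```python
import ipaddress

# Invented subnet plan: three field subnets behind the receiver/gateway,
# plus the studio LAN reached directly.
ROUTES = {
    ipaddress.ip_network("192.168.1.0/24"): "receiver-gateway",  # field subnet A
    ipaddress.ip_network("192.168.2.0/24"): "receiver-gateway",  # field subnet B
    ipaddress.ip_network("192.168.3.0/24"): "receiver-gateway",  # field subnet C
    ipaddress.ip_network("10.0.10.0/24"): "direct",              # studio LAN
}

def next_hop(dst):
    """Pick the next hop for a destination address from the route table."""
    addr = ipaddress.ip_address(dst)
    for net, hop in ROUTES.items():
        if addr in net:
            return hop
    return "default"
```

Whether the field assets share the studio subnet or sit on separate ones only changes the table; the control traffic always funnels through the same tunnel.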
David Bourgeois (02:01:13):
Jim Jachetta (02:01:14):
Let me see here. Okay. Bear with me that should be it. Do you see?
David Bourgeois (02:01:34):
Jim Jachetta (02:01:35):
You see the screen? Okay, good. So VidOvation, we're the master distributor for a very popular bonded cellular brand, AVIWEST; it's David's neighbor in France. So we've been doing a lot of at-home production, REMI production, whatever acronym you use for it. We presented a lot of user case studies with Sports Video Group. We're working very closely with the PGA: VidOvation, AVIWEST, and CyanView. We've done projects with Turner Sports. I like to say VidOvation was instrumental in creating a new category of live reality TV, particularly shot REMI-style, at-home production: Live Rescue, Live PD. Live PD, we're hoping, will come back soon. So live sports, live news, live entertainment, but particularly multi-camera. Bonded cellular was really made to shoot a news event, shoot a reporter on the courthouse steps with one camera.
Jim Jachetta (02:03:03):
Shading becomes less of a problem when you're only dealing with one camera, or you put the camera on auto. Where AVIWEST really differentiates is multi-camera production. What makes AVIWEST special is that with their SST, Safe Streams Transport, they're able to maintain frame-accurate genlock across dozens of cameras, not two or four. We have some productions using as many as 40 or 50 cameras, and they are all in perfect genlock, frame-accurate and in lip sync with each other. So I've spoken about this before, but why do we want to do an at-home or REMI production? Well, before COVID it was a good idea to save money: you didn't have to send your skilled workers, your video engineers, your operators, put them on planes, pay for food, pay for travel. You need fewer people, and now with COVID and social distancing…
Jim Jachetta (02:04:08):
You may not be allowed to have more than a certain number of people on the course, or in the production, or in the truck. So it saves on personnel, saves on time, and for most of our customers, I don't know how it is in Belgium, David, but for most of our customers here in the U.S., saving money is a big incentive. So here are some… I know we're going on more than 90 minutes here, so I'll make this quick. Here are some of the architectures where we bring all the video feeds, all the ISO camera feeds, back to master control. There are topologies where they'll switch the show with a smaller truck on site and then send the switched production back to master control. They could be shading the cameras locally or remotely. Many of our customers are doing a hybrid approach where they don't send everyone on site.
Jim Jachetta (02:05:08):
Maybe they send half the crew; instead of sending three trucks, they send one smaller truck. We leave it up to you, the customers, the production people, to figure out the topology, the configuration that works for you. We work with many customers that don't have a control room; we do the whole production in the cloud. The show is switched in the cloud, using cloud production tools. So whatever the configuration, we can help you. And this is the magic sauce of AVIWEST: this SST, the Safe Streams Transport. It bonds or aggregates the networks together, up to 11 or 12 connections: eight cellular, two LAN, multiple Wi-Fi. It does ARQ or packet retransmission, forward error correction, load balancing, et cetera. You can even prioritize a network connection.
Jim Jachetta (02:06:09):
So if somebody gives you a free internet connection but you don't trust it, you can set that as high priority, because it's free, and the cellular as low priority, because you're paying for that. And the unit will… It's never a good idea to turn off your cellular modems, because if you need them, they won't spin up fast enough. It might take 30 or 60 seconds, or an operator has to be there to turn them on. So we prefer you put them on low priority, and then they'll kick in if the primary connection should fail. Here's the slide, we talked about this in some of David's slides, where the AVIWEST StreamHub, their receiver, acts as the gateway: not only to receive the video, but to route the control packets of the CyanView system out to the assets in the field. So you can see here PTZ cameras in the field. AVIWEST has their PRO series; those mount on the camera.
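A simplified model of the priority behavior Jim describes: standby modems stay connected (never powered off) and take over only when the preferred link drops. The link names and fields below are invented for illustration, and real SST bonds links with weights rather than this all-or-nothing selection:

```python
def pick_links(links):
    """Send on all 'up' links at the best (lowest-numbered) priority."""
    up = [l for l in links if l["up"]]
    if not up:
        return []
    best = min(l["priority"] for l in up)
    return [l["name"] for l in up if l["priority"] == best]

links = [
    {"name": "venue_internet", "priority": 0, "up": True},  # free, preferred
    {"name": "cell_modem_1", "priority": 1, "up": True},    # metered, standby
    {"name": "cell_modem_2", "priority": 1, "up": True},    # stays connected
]
normal = pick_links(links)       # the venue link carries the traffic
links[0]["up"] = False           # venue connection drops...
failover = pick_links(links)     # ...modems take over with no spin-up delay
```

Keeping the modems "up" is what makes the failover instantaneous in this sketch; a modem that had to reconnect first would add the 30 to 60 seconds Jim mentions.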
Jim Jachetta (02:07:11):
AVIWEST was the first bonded cellular provider to put the unit on the camera. They didn't invent the idea; I think the microwave industry was the first to do that, putting a transmitter between the camera and the battery. But AVIWEST was the first in bonded cellular. They have rack-mounted options to go in your truck or in your master control. And then they have a nice small AIR, which you can put on your belt with a clip or in a small shoulder bag. Here's a close-up of how to hook up the unit. So if you're using the data bridge functionality to, say, do camera control with David's product, you've got the blue Ethernet connection. Your video would go over HDMI or SDI, and all of this can be transported at the same time. So we can send live HDMI or live SDI video concurrently with the data bridge, camera control, and intercom.
Jim Jachetta (02:08:17):
So we can do all of these functions simultaneously. This is just a quick slide on how to turn it on: you hit the little cloud, you turn it on, you see your data rate. You can see in the unit there are icons to show you that you have the data bridge or the cloud connection enabled. This slide shows a typical configuration. The CyanView gear, the controller, would be here on the right, on the studio side, and you can see in this case it's on subnet 10. Then the assets in the field we actually have on three different subnets, A, B, and C, and you can see subnet one, subnet two, and subnet three. The traffic gets routed through the StreamHub; the StreamHub is used as a gateway to route the traffic. The AVIWEST technology and the CyanView technology work very well together.
Jim Jachetta (02:09:15):
And VidOvation is representing both AVIWEST and CyanView here in the U.S., so we can help you with your configurations. We have David and his team at CyanView as backup tech support, and we have 24/7 support from VidOvation and from AVIWEST to make all these things work. You can see here on the StreamHub side the little cloud icon that tells you you have a cloud connection. One common question is: well, if I do the data bridge and camera control, will that impede my video transmission? The AVIWEST does give the video connection priority, but it allows a couple of hundred, I believe 300, kilobits per second of minimum throughput for control, which in most cases is enough, because we don't want to steal too much bandwidth from the video. The video is our main priority.
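The bandwidth split Jim describes can be sketched as a simple reservation: the data bridge keeps a small floor (roughly the 300 kb/s figure mentioned) while live video takes the rest, and with video off the whole pipe goes to data. A toy model, not AVIWEST's actual allocator:

```python
CONTROL_FLOOR_KBPS = 300  # approximate minimum quoted for the data bridge

def split_bandwidth(total_kbps, video_live):
    """Reserve the control floor when video is live; otherwise give the
    whole uplink to the data bridge."""
    if not video_live:
        return {"video": 0, "data_bridge": total_kbps}
    data = min(CONTROL_FLOOR_KBPS, total_kbps)
    return {"video": total_kbps - data, "data_bridge": data}

live = split_bandwidth(5000, video_live=True)    # video keeps priority
idle = split_bandwidth(5000, video_live=False)   # fatter pipe for data
```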
Jim Jachetta (02:10:16):
And these are just some more slides showing what you can see in the… Well, I think what this slide is showing is that if you turn the video off, you get a fatter pipe: more of the available bit rate, or all of it, will go toward the data bridge. But then when you turn the live video on, that gets priority. Here's another drawing showing, I guess for simplicity of configuration, David, maybe you could speak to this, using the RCP with the CyanView gateway instead of using the AVIWEST StreamHub as the gateway. Is it just an alternate configuration, or is there a reason why you might want to do that?
David Bourgeois (02:11:10):
No, no. I mean, the gateway is one of the possibilities, but it's not a gateway in an IP sense. I think that slide should be revised: normally it should be the RIO on the left, and the gateway is completely optional. The gateway was developed at first for when we have really a lot of cameras on a single production. Here, for remote production, you're talking about four to eight cameras; you don't need a gateway. The gateway is also a way to rack things in a truck and have multiple RCPs for multiple operators. That is really what the gateway is for today: a multi-operator workflow, where you might have multiple operators sharing the same cameras. But that's another layer of complexity we didn't talk about. In a simple diagram like this, you don't need a gateway; the RCP is going to use the AVIWEST system as the IP gateway to reach the RIO on the other end.
Jim Jachetta (02:12:15):
I'll get Ronin and Samuel to correct the slide. So yeah, on the left, as you say, the CI0 is just the serial-to-IP converter, but it doesn't have the intelligence to go through an unmanaged network like cellular or the internet. So, as you said, David, that should be a RIO there on the left, and the gateway is really more of an option for a more complicated configuration. Okay. So, these are some of the benefits. As I mentioned, we've done numerous at-home productions here in the U.S.: Turner Sports, the Ryder Cup, championship leagues, the PGA. We've done some live productions with A&E, Fox, Discovery: Live Rescue, Live PD, First Responders Live. Here are some more customers we've worked with, and this is just a small sampling: NBC, NBC Sports, Golf Channel, Sky Sports, Golf TV, GolfPass, the PGA, and Twitter, I should say. Also, the AVIWEST technology has won multiple Emmy awards; there's the little Emmy at the bottom right. We've got to work on getting you an Emmy, David. Your technology is definitely Emmy-worthy. I know some people; I've got some friends.
Jim Jachetta (02:13:43):
Here's a closer look at the units. You see here, top left: traditionally AVIWEST has always mounted their units on the camera. So if you're using a larger camera with an Anton Bauer or V-lock battery, you can mount it on the camera. Below that, you can put the PRO3 into a compact little backpack if you're using a camera that does not have an external battery. And then on the right we have the AIR, and there's a little pouch it goes in, so you can wear it over your shoulder. Or, similar to mounting the CyanView on the camera, some customers mount them together on the camera; the AIR has a 1/4-20 mount to put it on an accessory shoe on the camera. So let me just jump ahead.
Jim Jachetta (02:14:36):
Another key feature of the AVIWEST tech that the PGA and others like Live PD have found very useful is analog audio inputs. Camera control is very important, and there are some shortcomings out there that CyanView helps with. I think of myself as more of a video guy, but you can’t do video without audio. With Live PD and the PGA, having analog audio inputs on the bonded cellular unit opened up some great possibilities: they could use a shotgun mic on the camera, but then have lapel mics on the talent, on the commentator, fed into these analog audio inputs to give us better audio options. So here’s a little bit about VidOvation. We encourage you to reach out to our team. We offer many professional services, including consulting, design and engineering, systems integration, project management, warranty, and support. Here are some of the customers we’ve worked with.
Jim Jachetta (02:15:43):
We represent a lot of great brands like CyanView, like AVIWEST, ABonAir for microwave, et cetera. So we’d love to hear from you folks. If you’d like to get in touch with David, reach out to the VidOvation team. If you have any questions or comments… I actually have a conference call at noon, David, but we have a few minutes. I’ll check the chat in a second to see if there are any questions. You can reach me personally at firstname.lastname@example.org or call VidOvation at (949) 777-5435. We would love to hear from you. Let me just take a look and see if there are any questions. I think we covered everything. Well, thank you so much, David. Thank you for being with us today. It was truly an honor to have you. We may have to divide this video up into maybe two parts because it went about two hours, but I thank you for the great content and the great knowledge you’ve laid on us today. Have a good night, and thank you so much.
David Bourgeois (02:16:56):
Thank you for having me.
Jim Jachetta (02:16:58):
Take care, take care.
Jim Jachetta (02:16:59):
We’ll talk to you soon. Bye-bye.
Podcast: Play in new window | Download (Duration: 1:51:11 — 101.8MB) | Embed
Subscribe: Google Podcasts | Email | RSS
Podcast (video): Play in new window | Download (Duration: 1:51:11 — 2.5GB) | Embed