Join Jim Jachetta, CTO of VidOvation, and Jesse Foster, Director of Product Development & Western Region Sales at MultiDyne, to learn how you can use fiber-optic camera adapters to incorporate large image sensor digital cinema cameras into live multi-camera production with remote power and bidirectional signal transmission on a single cable.
Find out how…
- To address different formats and signal requirements with a single adapter
- To choose the right fiber-optic camera adapter for various camera types
- To use the same tools and workflow implemented by state-of-the-art studios and mobile production units to achieve a unique look and feel
Jim Jachetta:
Good morning everyone. Jim Jachetta here from VidOvation. I’m the co-founder and CTO. Today we have a very special guest, my friend Jesse Foster from MultiDyne; he’s the Director of Product Development and Western Region Sales. He’s going to lay some fiber optic knowledge on us, particularly how to use studio and cinema cameras in a multi-camera, live production environment, right, Jesse?
Jesse Foster:
Correct. That’s right.
Jim Jachetta:
All right. So lay some knowledge on us.
Jesse Foster:
Okay. So thanks for joining, everybody. I am located in the Los Angeles area and report back to the MultiDyne headquarters in Hauppauge, Long Island, New York. A little history on the company: we’ve been around since 1977, and over the years we’ve focused primarily on the markets and applications that are on the right-hand side of the screen. For the sake of this presentation, we’re going to be focused on venues and stadiums and live stage events, primarily on the studio signal extension side of things, and touch on what we can do with PTZ and POV, operatorless cameras, if that’s a real phrase.
Jesse Foster:
Other areas of interest that you might see here, follow up with myself or Jim afterwards; we can get into what we do in the military space and streaming and so forth. One of the main through lines that you’ll see in this presentation is our leveraging of the SMPTE fiber optic spec, in the United States it’s this SMPTE 304M connector. You see it’s an international standard, an international technology that’s been standardized by the multiple bodies I have listed here. The connector is then utilized with the SMPTE 311M spec of cable, which you can see in the right-hand corner of the screen there. It is comprised of two single-mode fibers, which means your bandwidth is, theoretically, limitless to some degree.
Jesse Foster:
There are also two power conductors and two low-voltage signal wires. So it’s an all-in-one connection. Once you start seeing the breakout of signal I/O that we can do on our products, imagine what that would be traditionally in the copper sense: it would be a very large loom of copper that’s very heavy and very distance limited. So this technology is streamlining, and it enables workflows that are not achievable with copper. And in the cases where we don’t have the SMPTE 311 cable, whether it’s an older stadium or a studio facility, we have techniques to get the payload onto a single-mode tactical fiber optic cable and inject power at the receive side, or at the camera side.
Jesse Foster:
So the message that we can turn any camera into a SMPTE camera really is applicable to any type of video source that you might have. It could be a VTR that we hang on the end of our transmission system, a throwback VTR server or something like that. So you’ll see that we’re not very platform specific; we’re very universal in our solution. You’ll probably find some applications that I don’t mention here, and we’d be interested to talk to you about those as well.
Jim Jachetta:
Well, you were telling me yesterday, Jesse, when we were rehearsing, that everything you do, from the cable to the technology, is very standards based.
Jesse Foster:
Correct.
Jim Jachetta:
And you’ll get into that more.
Jesse Foster:
Correct. And that’s how we can be everything to everybody in that sense. Primarily the cameras and the peripheral items in the industry are standards based, whether it’s SMPTE or others, and we make sure that we are compliant with those standards. So we’re plug and play, and we don’t introduce anything proprietary [inaudible 00:04:42] or anything in this level of product. So to bring the SMPTE studio camera message full circle here, it’s been about 20 years, I believe, since HDTV really was rolled out, that the SMPTE cable spec has been leveraged by the prime camera vendors. And I have the ones that we work with listed here. So it’s really the who’s who of broadcast cameras traditionally: Sony, Hitachi, Grass Valley, Panasonic and Ikegami.
Jesse Foster:
So we actually have product that augments these systems and, like I mentioned, gets you to the single-mode workflow and allows you to get away from the hybrid cabling where required. Say there’s a studio or a stadium that doesn’t have it, or there are issues with electromagnetic interference where the copper in the cable could pick up some unwanted interference. That’s another reason to get away from the hybrid cabling.
Jesse Foster:
So to do that, we have a product line called the SMPTE-HUT; it’s a hybrid universal transceiver. On the left-hand side is the base station unit. This plugs right onto the back of a CCU. The studio SMPTE systems from Sony, for example, are going to be comprised of two elements: you’ll have the camera and you’ll have a base station. The base station supplies the power and does all the transceiving of the video signals, the intercom signals, everything. So you’ll see that we have the equivalent of that system in our SilverBack line, which we’ll get to. But when we’re working with these traditional SMPTE systems, this is the type of product we use to get you to that single-mode workflow. So the unit on the right is the camera power insertion unit and the unit on the left is the one that goes onto the base station.
Jim Jachetta:
So Jesse, you would use this tech if you don’t have SMPTE connectivity throughout your facility. It’s mapping the SMPTE connectivity to more traditional, more generic single-mode fiber. Is that correct?
Jesse Foster:
Correct. Yeah. This is a single-mode, nine-micron fiber, which is the same as what’s in the SMPTE cable. But it’s unrestrained; it doesn’t have the copper around it, the jacketing and everything. You can get it in what’s called a tactical cable. So tac-12 would indicate that it’s a tactical fiber that’s meant to be used in the field, can be run over by vehicles, and is meant to be resilient, and the 12 would indicate that there are 12 fibers in it. There’s also tac-2, so really the number indicates how many fibers are in a given cable. This would be a tac-2 scenario that you see here.
Jesse Foster:
The signaling from the camera to the base station and back is bidirectional by nature, so this takes two fibers in this standard application. Another scenario where you’d want to get away from the hybrid fiber is the distance limitations that are implied with it. Nobody really makes very long SMPTE 311 cables. They make them long, but not as long as you can go with single-mode fiber, because of the weight and the cost and everything; the reels would be too big, it’s just not feasible.
Jim Jachetta:
Well, yeah. And I believe, Jesse, the resistance of the copper cable also inhibits how much power you can push through it.
Jesse Foster:
Correct. That’s right.
Jim Jachetta:
That’s part of it as well, right? Yeah.
Jesse Foster:
Correct.
Jim Jachetta:
[inaudible 00:08:31]. Single mode, you could go 15, 20 kilometers; you’re not going to make a 15 or 20 kilometer SMPTE cable.
Jesse Foster:
Right. Mostly based on just the cost and weight and everything, and it’s a waste, because you’re wasting all of the additional copper connectivity that’s in there. You’re only going to utilize the two fibers, so you might as well do something like this, with a system like this, go those 20 kilometers, and get back to SMPTE at the camera and at the base. That’s what we’re showing right here. And our SilverBack V data sheet, which we can send a link out to, it’s on our website, has a chart that shows what kind of power distance you should be able to expect based on a given type of [inaudible 00:09:11] cable. And it really does come down to the diameter of the copper power conductors that are in there, to your point.
Jesse Foster:
There are two workflows here. One is active at the camera side: we’re actually injecting power, and depending on the type of cable you’re using, we can go up to a kilometer to power the cameras, and down from there if you’re using thinner-gauge SMPTE 311 fiber, for example. The second scenario is the camera is powered locally, using four-pin XLR or traditional camera power, and we’re doing this with a passive device; it’s really just converting the two STs back into the SMPTE connection to make the connection to the camera.
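To put some rough numbers behind the power-distance point above, here is a minimal back-of-envelope sketch in Python. The supply voltage, camera load, and conductor diameter are illustrative assumptions, not figures from MultiDyne's published chart; it only shows why thicker copper and shorter runs leave more voltage at the camera.

```python
import math

# Back-of-envelope look at why the copper gauge in a hybrid cable sets your
# power distance. Supply voltage, camera load, and conductor diameter below
# are illustrative assumptions, not figures from any vendor chart.

COPPER_RESISTIVITY = 1.72e-8  # ohm * meter, at roughly 20 C


def conductor_resistance_ohms(length_m: float, diameter_mm: float) -> float:
    """DC resistance of one copper conductor of the given length and diameter."""
    area_m2 = math.pi * (diameter_mm / 2.0 / 1000.0) ** 2
    return COPPER_RESISTIVITY * length_m / area_m2


def voltage_at_camera(supply_v: float, load_w: float,
                      length_m: float, diameter_mm: float) -> float:
    """Voltage remaining at the camera after the drop on the supply/return pair."""
    current_a = load_w / supply_v  # simplified: draw estimated at the supply voltage
    loop_ohms = 2.0 * conductor_resistance_ohms(length_m, diameter_mm)  # out and back
    return supply_v - current_a * loop_ohms


if __name__ == "__main__":
    for run_m in (100, 250, 500, 1000):
        v = voltage_at_camera(supply_v=180.0, load_w=100.0,
                              length_m=run_m, diameter_mm=0.6)
        print(f"{run_m:>4} m run, 0.6 mm conductors: ~{v:5.1f} V left at the camera")
```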
Jesse Foster:
They’re typically passive, like I just referenced, but we can build these products up. A key point to make at the front end of this presentation is that we do a lot of custom work. We have high-end customers asking us to push the envelope in regards to how these things are done, and then we turn those into standard products moving forward. To simplify that for the customer and for us, we’ve become very flexible in the way that products are built. You can get a basic version of something or a very sophisticated version, and that applies to the openGear cards, our throwdowns, this type of product, and the camera adapters. It’s modular by design and we can stuff it as required for the application.
Jesse Foster:
So in that vein, we have the ability to put active optics in this system, and once you’re doing that, you’re actually relaunching the optical signal. At that point you could remap it to different wavelengths, or extend and boost that optical signal by regenerating it. And once you remap the optical signal, you could be outputting unique CWDM wavelengths, up to 18 unique wavelengths, that we could then multiplex, which in this scenario gets you to nine camera chains, because there are two wavelengths per camera chain. I have a slide that will elaborate on that.
Jesse Foster:
Once you do get rid of the hybrid cable and you’re on these standard STs, the single-mode fibers, we can then use a product called the Fiber Saver, which is essentially a wavelength-shifting, remapping CWDM mux system. The one up top, the one RU unit, is 18 wavelengths, but it’s treated as a nine-by-nine transceiver, so it’s going to do nine SMPTE camera payloads over a single fiber. It’s very popular in the rental market for Super Bowls and other applications where there’s a fixed amount of fiber, say they only have a tac-12.
Jesse Foster:
And the majority of them are being used for other applications. You use something like this, or you could take those devices that are using up all the other fibers and run them through this device and free up additional fibers. That’s what makes this such a powerful solution. We have three standard flavors here. We’ve got the 6000 series, which is a third RU width that shares a similar form factor with some other products you’ll see, which makes it really easy to deploy and rack. Then you get into the one RU, and there you have 12 channels, which could be unidirectional or bidirectional. You could have six signals going in each direction or all 12 going in one direction. And the same is true on the 18-channel product: you could have the nine by nine, or you could have 18 by one, or 18 by zero, which is going to … we could build it even more granular than that. If you have a requirement to send four signals one way and 14 signals the other way, we can do that.
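The "18 wavelengths, nine camera chains" arithmetic Jesse describes can be sanity-checked against the standard CWDM grid. The pairing of wavelengths below is purely illustrative, assumed for the sketch, and is not MultiDyne's actual channel plan.

```python
# Sanity check of the "18 wavelengths, nine camera chains" math against the
# standard CWDM grid (ITU-T G.694.2): 1270-1610 nm in 20 nm steps gives 18
# channels, and a bidirectional camera chain consumes one wavelength in each
# direction. The pairing below is illustrative, not a vendor channel plan.

CWDM_GRID_NM = list(range(1270, 1611, 20))  # 18 center wavelengths


def camera_chain_pairs(grid_nm):
    """Pair adjacent grid wavelengths into (camera-to-base, base-to-camera) chains."""
    return [(grid_nm[i], grid_nm[i + 1]) for i in range(0, len(grid_nm) - 1, 2)]


if __name__ == "__main__":
    print(f"CWDM wavelengths available: {len(CWDM_GRID_NM)}")
    for chain, (up_nm, down_nm) in enumerate(camera_chain_pairs(CWDM_GRID_NM), start=1):
        print(f"camera chain {chain}: {up_nm} nm up / {down_nm} nm down")
    # -> 18 wavelengths, 9 bidirectional camera chains on one fiber
```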
Jesse Foster:
So it’s a very flexible product. You’ll see in the notes there that there’s re-clocking bypass per path, so we’re able to handle non-SMPTE signals: an SDI-type signal, or a traditional data signal like 10 gig ethernet. That opens up the ability to do one gig or 10 gig. You can do Dante networks, you can do SMPTE 2110 or 2022-6. Sorry, the old one’s throwing me.
Jesse Foster:
And then you also see that there’s a copper interface as well. So you can come at that with a 12-gigabit SMPTE 2082 signal or a 424 three-gig signal; it just auto-detects and blasts it out over the optics. There’s also auto failover functionality. Say there is no optical input to be remapped, it’ll default to the copper input. So you can build some pretty sophisticated auto patching [crosstalk 00:14:13]. Yeah.
Jim Jachetta:
Let me ask you a question, Jesse. So we did an installation at Fenway not too long ago, and I believe at Fenway they had four or five cameras in center outfield, probably catching some shots of the hitter, the pitcher, first base. So they’re kind of above the field. And I guess the old-school approach would be, for those five cameras, you would run five SMPTE cables to the truck bay. That’s a more costly way to do it. You could use one of these Fiber Savers between the truck bay and the outfield position and then map the generic single mode, or map these multiple wavelengths, to the SMPTE cables just for the last couple of feet. Instead of going hundreds of feet of SMPTE, you would just go SMPTE from the patch panel to the camera, and then from the patch panel in the truck bay to the truck. Am I on the right track with that? Am I understanding it correctly?
Jesse Foster:
Correct. Yeah. If it was already an ST-connectorized, single-mode fiber signal, you can plug right into this. For the SMPTE system, you’d use our HUT system to modify the cable to this type of interface, and then you would leverage this to get all of that over a single fiber. But a key point of this system is it’s wavelength agnostic on the optical input side. So if you had a bunch of 1310-based transceiver-type products, which are the cheapest lasers to make and fabricate [inaudible 00:15:57], low cost, launching at minus 7 dBm, all that kind of lower cost stuff, you could interface with the optical input of this device and it’ll convert it to the CWDM wavelengths. So you can really take low-cost products and leverage CWDM single-fiber transport, and then get back to that lower cost gear at the other end.
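For readers who want to see where figures like "launching at minus 7 dBm" and the earlier "15 or 20 kilometers" come from, here is a rough loss-budget sketch. The receiver sensitivity, fiber loss, connector loss, and margin are generic textbook assumptions rather than specifications for any MultiDyne product.

```python
# Rough optical link budget using the kind of numbers mentioned above (a
# low-cost 1310 nm FP laser launching around -7 dBm) plus generic textbook
# assumptions for receiver sensitivity, fiber loss, and margin. Not the
# specifications of any particular product.

def max_reach_km(launch_dbm: float, rx_sensitivity_dbm: float,
                 fiber_loss_db_per_km: float,
                 connector_loss_db: float = 1.0, margin_db: float = 3.0) -> float:
    """Distance supported by the available loss budget."""
    budget_db = launch_dbm - rx_sensitivity_dbm - connector_loss_db - margin_db
    return max(budget_db, 0.0) / fiber_loss_db_per_km


if __name__ == "__main__":
    # Low-cost 1310 nm FP laser into a modest receiver (~0.35 dB/km at 1310 nm).
    print(f"1310 nm FP laser : ~{max_reach_km(-7.0, -17.0, 0.35):.0f} km")
    # Higher-power CWDM DFB laser at a longer wavelength (~0.25 dB/km).
    print(f"CWDM DFB laser   : ~{max_reach_km(0.0, -23.0, 0.25):.0f} km")
```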
Jim Jachetta:
Well, we want all our fiber optics to be MultiDyne, but God forbid a customer has another brand. What you’re saying is, as long as it’s 1310 input/output, you could map it through the Fiber Saver.
Jesse Foster:
Any CWDM wavelength. So 1310 FP is a cheap type of laser, but DFB is what CWDM uses, so it’s a higher-end optic. The receive side is wideband, though, so anything in the 1270 to 1610 range, anything in there, it’ll detect it, it’ll-
Jim Jachetta:
It sees the whole spectrum from 1310 all the way through the CWDM.
Jesse Foster:
Wavelength agnostic.
Jim Jachetta:
You can shove any single-mode signal into any of the Fiber Saver’s inputs and it will work.
Jesse Foster:
Yeah. And to that point, this drawing is actually showing leveraging multi-mode on the edge. Multi-mode is notoriously not CWDM friendly, because the core size is going to be 50 micron or 62.5 micron and you’ve got modal dispersion, chromatic dispersion; it doesn’t work. You need that tight core for the refraction to be tight enough that those signals don’t get crosstalk and so forth. So multi-mode is always a nonstarter for high-bandwidth signals; with 12G you’ve got to keep very short runs, in contrast to what you’d get on single mode.
Jesse Foster:
So what this system actually allows you to do, what I have drawn here, is use just a switch that’s using truly COTS, commercial off-the-shelf, 10 gig multi-mode SFPs, which are the lowest cost solution when you’re doing a data center or a large range of devices and you need optical connections. So what I’m showing here is commingling one set of Fiber Savers to let you get 10 gig ethernet over that single fiber along with some SMPTE HUTs.
Jim Jachetta:
So your optical input will detect or work with a multi-mode connection, I guess assuming it’s … well, multi-mode is always ST. So as long as it’s 1310 multi-mode, the saver will see it and handle it just fine.
Jesse Foster:
Yeah, so the input and output stages are decoupled. It hits the receiver optic and then it gets retransmitted, so there’s an electrical handoff to another SFP that is then single-mode, CWDM facing. And on the receive side, again, it’s decoupled. So you could actually go in multi-mode on one side and come out single mode on the other side if you want, but I’m traversing this as multi-mode in, multi-mode out.
Jim Jachetta:
Yeah. You would think, if you’re using commercial off-the-shelf, COTS, IT stuff. It’s not uncommon to see IT switches using a multi-mode SFP just within the rack, like you said, Jesse, going from equipment to equipment. But then going from building to building, I see it all the time where, to save a couple of bucks, they put a multi-mode SFP in there and I’m like, “Why are you boxing yourself in to save a couple of bucks?” But people do it.
Jesse Foster:
Right. And that’s the scenario there. So yeah, primarily in the data center, in that world it’ll be LC connectorized duplex. So this workflow would require just a passive jumper from LC to ST.
Jim Jachetta:
Yeah, I picked a breakout, a conversion, a pigtail. Yeah.
Jesse Foster:
Yes. So let’s see, transitioning onto another one of our platforms; it’s all complementary. We have a [inaudible 00:20:06] type of mindset with these products where, number one, it’s extremely customizable. If the configuration you’re asking for doesn’t exist, we’ll build it for you, and then it’ll have a part number that you can reference the next time you need it. This is the VB series, Versatile Bricks, essentially a build-a-box. The data sheet is very descriptive about what you can build and what the form factors are.
Jesse Foster:
I have another slide here that explains a little further. The dog bone is the single VB card enclosure, and that’s an eighth rack width wide. There’s also the bone yard, a one RU retaining bracket that you can leverage to get these in a one RU space and positively rack them. The VB-2 is a quarter rack width and houses two cards. The VB-6 is a third rack width, so you can get three of these in one RU. And the VB-10 houses 10 cards and is a half rack width.
Jesse Foster:
This platform is so customizable, and the options are beyond what I could get on really one slide here, but this gives you a sense of the types of cards and signals we handle. We can do ST, we can do SC or LC fiber connectors. We could even connectorize this with the SMPTE 304M connector, and I have some drawings that’ll show that. But it’s extremely flexible-
Jim Jachetta:
Yeah, but you were telling me, Jesse, to clarify, when you say that they’re flexible and configurable, it’s at the time of manufacture; you order it with the I/Os you want. Some of the internal boards and optical components are interchangeable, but at the time of manufacture, right?
Jesse Foster:
Correct. That’s right. And that applies to the camera adapters and the openGear cards; the price list and the data sheets are broken out in such a way that you’ll see all the different iterations, as many as we could capture. There could be some that you’d come up with that we would accommodate and then add to the list.
Jim Jachetta:
The customers will invent the new configuration.
Jesse Foster:
Correct. So the one on the right here is the VB-10, which is capable of multiple 12G-SDIs, genlock, and timecode. It’s got three gigabit ethernet LAN extensions integrated, and it’s all over a single fiber connection right here. So it’s a full transceiver; you tell us what you need, and that’s how flexible it is. Again, like Jim referenced there, it’s a build-time situation. But we have customers sending them back to the factory to be retrofit, to update to UHD support or whatever; if their requirements change, we can accommodate that.
- Sony HDC-P50 Fiber Optic Extender
Jim Jachetta:
So you can rework it; it’s just that the customer can’t pop modules in and out in the field. You do an RMA and then you guys can rework it and-
Jesse Foster:
Correct.
Jim Jachetta:
So essentially the product will never go obsolete theoretically.
Jesse Foster:
Right as we develop new boards, like some that are on the [crosstalk 00:23:14].
Jim Jachetta:
Don’t oversell it Jesse, but yeah.
Jesse Foster:
Yeah. That’s a pretty true statement as far as I can see out into the future here. To that end, we are doing a 10 gig ethernet transceiver card that’s on the drawing board right now, and HDMI 2.0 or above, as that evolves, we’ll have that support in there. So it is an evolving platform, and as the requirements change, we are accommodating them. Further to the flexibility and how easy this can be to integrate, we have some brackets that allow it to go under PTZ cameras, like that 10-slot version you saw; that is the 10-slot plate here. It’s universal by design, meant to go between a tripod or a camera mount and any type of camera, really.
Jesse Foster:
So this is a Panasonic PTZ here, but you’ll see another drawing shortly that uses the Sony HDC-P50, I think it is. It’s really flexible; you can mount a studio camera on this thing. It’s a rugged design. I don’t have a weight spec necessarily, but a good quick aside here: this variant that you see there was actually built for Avatar. About a year ago we engaged with the Avatar production crew. They had 16 Sony Venice cameras, and we went in with the SilverBack camera adapter.
Jesse Foster:
And these guys are really sharp and they thought it through. They said, this is going to add a lot of leverage, a lot of extra weight, to our 3D rig. What else do you have? So this VB-10 came up; we thought through the workflow they needed, and this one box is now facilitating all of the transceiving that that industry-leading 3D rig needed. It’s all in this one box, and these plates make it highly integrated into their 3D rig. That’s where that particular flavor of VB-10 came from. But again, whatever you need.
Jesse Foster:
Here’s an example: one of our customers uses this in their rental facilities to get high frame rate, slow-motion production for the NBA. That’s the Sony HDC-P50 camera, and this is essentially to scale. The genlock requirement is fulfilled easily with the genlock card being populated in the VB box. You then have tally, no problem. We also have serial comms available in there, and gigabit ethernet for camera control. In this scenario, this box is outfitted with two 12G-SDI transports. The P50 can have two 12Gs and four 3Gs.
Jesse Foster:
So if you were to use four 3Gs to get slow-mo going, where you’re taking additional SDIs off the camera body to accommodate those additional frames, we could populate those; you see there’s room on this VB-6 to add additional 3G or 12G-SDI paths. So very powerful-
Jim Jachetta:
So a new camera comes out with a higher frame rate and it’s got more 12G I/Os to get those extra frames out. This is an example where you would customize the rig for that new camera.
Jesse Foster:
Correct. So the customer I was referring to had identified that this platform can scale per the requirements. If they don’t need a six-path product and can get away with two, they would order it accordingly. But additional frames equate to additional SDIs, and additional resolution is another reason why the SDIs keep creeping up. So if you put four 12Gs in here, which you can, no problem, then you have a future-proofed 8K camera platform transport; it’s common to leverage four 12Gs, or quadrants, to get that across. The same platform can facilitate high frame rate workflows or 4K or 8K.
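A rough way to see why "more frames or more resolution means more SDI links", and why four 12Gs cover 8K, is to compare approximate uncompressed payloads against nominal SDI rates. The figures below are ballpark 10-bit 4:2:2 numbers that ignore blanking, audio, and the exact SMPTE mappings, so treat them as order-of-magnitude only.

```python
import math

# Ballpark check of "more frames or more resolution means more SDI links":
# approximate 10-bit 4:2:2 picture payloads against nominal 3G/12G-SDI rates.
# Ignores blanking, audio, and exact SMPTE mappings; order of magnitude only.

SDI_GBPS = {"3G": 3.0, "12G": 12.0}


def payload_gbps(width: int, height: int, fps: float,
                 bits_per_pixel: float = 20.0) -> float:
    """Approximate active-picture bit rate for 10-bit 4:2:2 video."""
    return width * height * fps * bits_per_pixel / 1e9


def links_needed(width: int, height: int, fps: float, link: str = "12G") -> int:
    return math.ceil(payload_gbps(width, height, fps) / SDI_GBPS[link])


if __name__ == "__main__":
    print("UHD 59.94p           :", links_needed(3840, 2160, 59.94), "x 12G")
    print("UHD 3x slow-mo       :", links_needed(3840, 2160, 3 * 59.94), "x 12G")
    print("8K (7680x4320) 59.94p:", links_needed(7680, 4320, 59.94),
          "x 12G  # matches the four-12G quadrant idea")
```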
Jim Jachetta:
Pretty cool.
Jesse Foster:
So transitioning into another one of our key product lines: openGear. We’re one of the early openGear partners and have always looked at it as an important part of our toolkit, to allow a customer to build the ideal system. We can commingle openGear with the VB; I have some drawings that’ll show that. You can go openGear to openGear, or openGear to VB, VB to openGear. Within openGear, we do fiber optics obviously. We have some OTT codecs, multiviewer technology, intercom conversion that you’ll see some slides on, distribution amps which are always needed, routers, some conversion, and audio embedders and de-embedders.
Jesse Foster:
And this line is always growing, so this is a key focus of ours. Just to identify some key features of the frame that make it industry standard: it’s as cutting edge as any frame out there in the sense that there’s gigabit ethernet to every slot. It also has CAN bus connectivity to every slot; that was the original standard used for command and control of the cards. But gigabit ethernet was added in a previous version of this frame, and that allows for configuration and monitoring and also some streaming capabilities. So it’s true ethernet, layer two at this point, to each slot. There are dual power supplies and reference inputs, so you can have reference failover, you can have multiple reference planes, and you can have bi-level and tri-level simultaneously.
Jesse Foster:
And then the redundant power, everything is front loading and hot swappable: the card modules, the power supplies, and the reference card for that matter. So if you had a reference card failure, you could swap that out without forklifting out the frame, which is a nice feature as well. On the intercom front, what we do is convert party-line two-wire, RTS or Clear-Com, to four wire for use in matrices or for optical extension, because party line is usually going to be wet; it’s going to have power on it to power belt packs. So what we’ll do is remove the power, convert it to four wire, do the transport, and take it back to party line with power. And we can power up to six belt packs in some cases, depending on the draw of the particular belt pack.
Jesse Foster:
So what that allows you to do is daisy chain the belt packs for production. You have party line, traditional intercom, but we’re actually bridging the single-fiber transport layer and getting your intercom handled nice and easy over that transport layer; that’s what we’re doing with these cards.
Jim Jachetta:
Lack of comms can be a real Achilles heel in any kind of platform, so I think this is a very important capability that you guys offer. A lot of people claim they do comms and it drops out and hiccups; it doesn’t quite work. This is rock solid. You can’t tell the photog where to point the camera if you can’t talk to him.
Jesse Foster:
Absolutely. Yeah.
Jim Jachetta:
I’ve seen other systems out there fail and they’re trying to call the photog on his personal cell phone and then they wonder why they can’t reach the guy to give them camera instruction. So this is really important.
Jesse Foster:
Yeah, because his phone’s off, because he’s a pro.
Jim Jachetta:
Yeah. Or you ran out of battery.
Jesse Foster:
So yeah, it definitely opens up a lot of different workflow options for our customers. And that Comms 300 package is the same physical enclosure as the VB, so there’s the Lego scenario I was referring to. This is actually a productized collection of our technology to allow low-cost, easy-to-deploy field production, for ENG/EFP use for example, where the camera side has the video transceiver payload. It’s got genlock for the camera, serial comms for older camera platforms, and gigabit ethernet for newer cameras for camera control.
Jesse Foster:
But also the reporter could plug a laptop in, so it’s a proper gigabit ethernet layer. On the way back to the truck, you have two-channel mic or line level with phantom power. And the red patch on this box here, that’s the Comms 300. All of the wiring to the optical transport is internal, which makes this really easy to deploy. If you get the Comms 300 in a standalone sense, it has a DB-25 industry-standard pinout, so it’s easy to use in that regard too. But this is a very clean package where we have it all tied together with all the wiring done internally. Again, you can power the belt packs on both sides.
Jim Jachetta:
So is this modular? When you say Legos, do they bolt together, and that’s how they communicate and that’s how they share power?
Jesse Foster:
Correct. Yeah. So all of the, what would be external if you bought these as independent items.
Jim Jachetta:
Separate. Yeah.
Jesse Foster:
This is all shared in this model, but again, you can get very granular just by getting each one of these separately; the video and data and all that is a particular VB, and then the Comms 300 is the intercom device. But this is nice and clean, and it really meets a requirement of the front lines of the world, the guys doing the ENG truck builds; this meets their requirements. We can also add additional functionality, because it is the Lego kit that is the VB, essentially.
Jim Jachetta:
Okay.
Jesse Foster:
So we do have brackets, basically a blank retaining bracket here, so this is rack mountable as well, on the truck side or in a fly pack; it’s ruggedized. Getting into the story where we can commingle the platforms: up top here we have VB to VB. And here’s where you see that we can essentially turn any type of edge device into a SMPTE studio camera, because we’re leveraging the 311 and 304 spec here to do power insertion using our Juice box. We have a Juice 48 volt and a Juice 60 volt, depending on the draw requirements, where you go single mode from, say, your bay side, you hit the Juice box kilometers away or a couple of hundred feet away, whatever the application is, and then we do the power insertion.
Jesse Foster:
And you’ll see here on the back of this chassis, there’s the SMPTE 304 connector, so we can power this box over that hybrid cable. Then we can actually power peripheral items like the PTZ camera using PoE++ or a standard pigtail 12 volt. We’re very flexible in this architecture. Not to go down too many rabbit holes there, because it’s a big story, but in the simple sense the drawing is pretty self-explanatory: these are 12Gs facing, two in each direction, but these could be built with four 12Gs in one direction to accommodate an 8K workflow like I referenced earlier. So all this stuff is future looking when it comes to high frame rate or 4K or 8K.
Jesse Foster:
So down below is where you have the intercom, the OG Comms, working with our 4600 series of openGear cards. The 3600 series is up to quad 3G and the 4600 is up to quad 12G, and then you get the full complement of signals. And again, this is scalable: if you don’t need all these signals, we have versions that are just the video with ethernet, for example. The data sheet clearly calls that out. But you see that we have the full studio signal extension system in multiple form factors.
Jesse Foster:
Now getting into one of our flagship product lines, the SilverBack. This is the SilverBack V. Here it is on a Sony Venice. Here’s the connector panel side, here’s the 4K camera side. The first input is 12G capable; it’s also 6G and 3G capable. The second input is 6G and 3G capable, and then three and four are 3G. That allows you to do quad-link cameras, dual 6G cameras, or 12G cameras like the Venice here. For additional connectivity, we have data breakouts for serial and GPIO tally. We have a four-by-four line-level audio transport over this connector here. There are breakouts and adapters for all of this available.
Jesse Foster:
We also include, I believe, six HD-BNC to full-size BNC adapter cables with this, just to ease the adoption of HD-BNC moving forward. You have your timecode output and your reference output to get that studio live multicam workflow going, where you’re genlocked and everybody’s on the same timecode base. We have dual intercom capability here; this is a five-pin mini XLR, and we provide a five-pin mini to full-size XLR adapter. Then these are just standard line-level or mic-level audio inputs that are mini XLR; we provide those adapters as well. And here’s your gigabit ethernet extension there.
Jesse Foster:
We have a 3G-SDI transceiver in this base model here. This is actually the flagship configuration, the highest level build. We do have more capacity that’s not leveraged yet, where we could bring out additional SDI transceiver capability. So it’s very flexible again; it’s hard to kind of put a [inaudible 00:38:05] around it.
Jim Jachetta:
What doesn’t it do, Jesse? Maybe just-
Jesse Foster:
Right, exactly. So on the dual base side, that’s a two RU form factor, but it handles two camera chains in one, and there’s a cost savings there for the end user. There’s also a single-camera version available where this second panel is just not populated. You get all of your quick-reference status indications on the front panel, for power, for signal presence. There’s also a panel here to control and monitor locally, and there’s a web UI capability to do some more granular settings and monitoring. On the back end of it, here we have the redundant power, just simply three or four connectors, and all the mirrored connectivity that was on the camera side. There’s your control for the web UI, and there are four integrated openGear slots.
Jesse Foster:
So this footprint is extremely flexible in that sense. You can do multiviewers, you can do audio embedding and de-embedding, metadata manipulation. You could do 3D LUT processing, because there are so many solutions in openGear, and it keeps growing, so it is just truly an opportunistic location to leverage-
Fiber Camera Backs for Cinema Cameras – ARRI AMIRA, SONY VENICE, CANON C500/C700, Panasonic VARICAM LT
Jim Jachetta:
Or say you’re shooting a game in LA and you want to send some ISO camera feeds back to New York via IP. You could put some encoders in there-
Jesse Foster:
Excellent point.
Jim Jachetta:
… would be, or IP monitoring in the venue or in the compound, you could shove them in there.
Jesse Foster:
Yeah. That’s a key product line I didn’t touch on there, and you’re absolutely right: bridging the studio, high-bit-rate uncompressed world with the streaming world, all in one footprint. It’s on point.
Jim Jachetta:
Right.
Jesse Foster:
Absolutely. Touching again on the fact that this is a very flexible, scalable platform, and it’s modular, card-based on both sides, we have versions that lend themselves to different market segments. Image magnification, IMAG, is a popular workflow for house of worship and for corporate, because the human spectator wants to see the lips move, ideally in sync. So latency is key, and we don’t really add any additional latency with this system. You have the ability to use a lower cost camera and have just the connectivity that you need for that workflow: just video to the base station, ethernet, and maybe some intercom. All the other stuff, don’t populate it at build time. There’s a cost savings for the customer, and there’s the single base station version that you see.
Jesse Foster:
We cover a lot of different market segments with this one platform based on its scalability. Now moving into what I’m finding exciting these days: these manufacturers all have new platforms to some degree. The AMIRA has been around for a while, but it’s unique in the industry in regards to the look it can deliver; it’s dominated digital cinema production. It’s the platform. Sony is trying to give them a run for the money with the Venice, which they will in some regards. The Canon C500/C700 also has a foothold. The Panasonic VariCam LT is the lowest cost solution of all of these, and it’s very popular for that reason; it makes great images and has a great feature set as well.
Jesse Foster:
What’s going on is, there’s a push in the industry to make more compelling content, whether it’s a concert or a live studio or stage production. They don’t want it to look like a baseball game. They don’t want to use a two-thirds-inch imager camera system, like the traditional SMPTE camera systems or ENG-type camcorders. They want to use the large 35-millimeter or larger imagers that some of these cameras offer. So what that means is they need to take that big front end, that great imager that might not be as feature rich in regards to the connectivity they need, and then use our system to bring that additional transport layer in, to turn these cinema cameras into studio cameras. That’s the message.
Jim Jachetta:
Well, yeah. These cameras were not really meant for live. You might be shooting a movie single camera and recording the raw video in the camera, then giving the video files to your editor, and they would piece together a produced show or cinema. So they really weren’t designed for a live broadcast workflow, and MultiDyne now gives you that ability.
Jesse Foster:
Correct. Because what they’ve done for themselves to get into that market segment of multicam live production using cinema cameras is, the AMIRA for example has a timecode input, it has a genlock reference capability, and it has a return video to send back for the operator to see on the viewfinder what the program switcher is set to, like a traditional broadcast workflow. And Sony natively, I mean that’s their world, the Venice supports all of that, and so do the Canon and the Panasonic. So to your point, absolutely, record on camera for high frame rate, for doing things in post later. But like we were mentioning, you’re getting that streaming capability live in real time; you also get usable signals that are referenced together that you could put into a production switcher and cut a live show simultaneously while you’re getting, say, a raw-
Jim Jachetta:
Yeah, so it’s like a concert: it gets broadcast live, and then you could cut together an even higher quality cinematic experience to supplement the live show later on. Or a lot of our VidOvation customers are doing live reality TV, and then they will take the recorded footage from the camera for a rerun show, that kind of thing. So they get to reuse the content for multiple purposes, live and then also as produced content later on.
Jesse Foster:
Absolutely. So a big event that ARRI was involved with this last year was Coachella, on the outdoor stage, where they used, I believe, five AMIRAs and two Alexa Minis. Each camera had its own director of photography and a first AC camera assistant, so they were getting unique looks in each camera. You could add a 3D LUT and make everybody look the same, but the idea here was they wanted each DP to have their own look for each camera position. Simultaneously they were going to a … I believe it was an NEP truck, and cutting and streaming to YouTube at the same time, based off of the additional connectivity that a camera transmission system like the SilverBack gives them.
Jim Jachetta:
So prior to that they would have just done a produced documentary about Coachella and not had the live streaming aspect. What we find with a lot of our customers too is that revenues in general, whether it’s advertising or whatever your revenue model is, are shrinking. To be able to utilize your content in more than one way is very, very important. So these people producing the Coachella content, I’m sure they generated revenue off the live show, and then they’re probably selling recordings of the produced show afterwards as well. So they have two revenue streams instead of just one.
Jesse Foster:
Right. With minimal intervention, with minimal operations needed happening simultaneously, so-
Jim Jachetta:
Right, doing them both with the same personnel and doing it at the same time with the same equipment. And MultiDyne is facilitating that ability to do all that.
Jesse Foster:
Absolutely. I have a couple slides here just to show some standard workflows. The Panasonic VariCam LT can give you UHD HDR capability off the camera head, but the signal is going to be a raw data signal over dual 3G-SDIs that needs to be debayered, interpolated, and turned into a standard SMPTE color space signal, which is done in this workflow using an Atomos Shogun Studio II. So that monitor will actually do the conversion and debayer, and give you standard, usable 12G-SDI signals to then record, run around over fiber, or do whatever you need for monitoring.
Jim Jachetta:
So the DP can use the Atomos monitor for his video assist and then it’s serving a secondary function of debayering the raw video as well. And that integrates into this whole workflow.
Jesse Foster:
Correct. And you’ll see that this version of the SilverBack V is optimized for the Panasonic VariCam LT, where it doesn’t have the additional cost of 12G or 6G transport. It’s just giving you the dual 3Gs that you need to get that raw signal off the camera body, and it’s giving you two 3Gs back. I think in some cases they use SDI for genlock, or you can use analog bi-level or tri-level like we’re showing here. So that’s why there are two going in each direction there.
Jesse Foster:
Here’s the dual-camera base station. In this scenario, we’re using single mode to the 311 at the camera. We have two different scenarios here. This one is connectorized ST right off the base; you go ST to ST to the Juice box, and then you do your 311 to the camera. But we can also keep the base station connectorized with the SMPTE 304 connectors, and then that could just go SMPTE 311 hybrid fiber to the adapter directly and get the power and all the beauty that the SilverBack-
Jim Jachetta:
Well, then you wouldn’t need the two Juice boxes in the middle there if you did SMPTE for the whole run.
Jesse Foster:
Correct. So as a rental house, you would say, okay, this is a standard studio, they have 311 in the walls, or we have reels we’re going to use. You just connect directly, if the distances all work out. If it’s longer or you need that single-mode functionality, we have, in the same form factor as the HUT base stations, a little box that’ll convert the 311, the 304, the LEMO connector, to ST right at the base. So you could have the same base station service two different rental customers, one that needs the ST long haul; you just use that little box. So it’s a very elegant way of-
Jim Jachetta:
You can correct me if I’m wrong, Jesse, but sometimes in sports, I know ESPN, they’ll shoot some footage and they want more of a documentary feel instead of your typical live broadcasts like you mentioned. They want more of the cinematic frame rate. They don’t want to shoot at 59.94. They want to shoot at, what will they shoot at, 24. They want more of-
Jesse Foster:
23.98.
Jim Jachetta:
23.98. Thank you, Jesse. That’s one of the things where they want to use the bigger sensor to get a better, richer look and feel, high dynamic range, et cetera. So using golf as an example, it’d be too far to run a SMPTE fiber from the ninth hole all the way to the truck in the parking lot, which is yet another two miles down the street. You’d put this Juice box at the midway point where you have power close to the camera, or if not, at the camera tower, so there are Juice boxes underneath the camera tower at that ninth hole. Would that-
Jesse Foster:
Absolutely.
Jim Jachetta:
… that would be the scenario.
Jesse Foster:
And to your point, like not wanting your content to look like a baseball game, not to … they look amazing, but that’s a TV camera. So like, and-
Jim Jachetta:
Well, I hope no one from MLB is listening right now.
Jesse Foster:
Oh, yeah. But they’re embracing-
Jim Jachetta:
Let’s get them to use cinematic cameras.
Jesse Foster:
Well, [inaudible 00:51:03] films uses a slew of AMIRAs, for example, to get there.
Jim Jachetta:
Yeah, there’s your example. That’s more documentarian than a live event; you want a different look. So wouldn’t it be nice, though, if you could shoot the NFL game and then use the same footage for NFL Films, for a similar purpose, shoot it once and cover both needs.
Jesse Foster:
That’s happening because the consumer is king, their monitors at home can do high frame rate, they can do HDR.
Jim Jachetta:
Right.
Jesse Foster:
They’re going to need to deliver that level of content sooner rather than later. So it really is the marketing departments of the different teams that are embracing the AMIRA and the Venice, and then that’ll start to make its way into the broadcast workflow using products like this. I have a slide that touches on the ASC Awards that we did in conjunction with ARRI and Mobile TV Group last month. That is a scenario where they wanted the content to look as good as possible for the cinematographers, but they also needed that TV workflow. So we used our converter products to get down to the truck, which was beyond the range of a SMPTE hybrid connection.
Jesse Foster:
So just moving forward here, here’s an example again where you’ve got two direct STs going. This is a dual UHD configuration, so this is like a Canon C700 or an ARRI AMIRA or a Sony Venice, leveraging the top build that we do in the SilverBack here to do your 12G, dual 6G in the case of the ARRI, or quad 3G in the case of the Canon.
Jim Jachetta:
I apologize folks, Jesse and I could talk about this for three hours. We’re approaching an hour, but this is good stuff. We’re going to let Jesse keep going. I’ll try to refrain from too many questions, Jesse.
Jesse Foster:
Yeah, I know, we’re almost through here.
Jim Jachetta:
Oh, perfect.
Jesse Foster:
I didn’t want to dwell on this because it’s kind of just-
Jim Jachetta:
Just a different configuration of a similar application. Yeah.
Jesse Foster:
Absolutely. So here’s the ASC Awards event I was just referring to. This was down at Hollywood and Highland. I was telling Jim, you might recognize this carpet from the Ray Dolby Ballroom, which is where this took place.
Jim Jachetta:
Yeah, well, Jesse and I are both in [inaudible 00:53:27], so we go to a lot of the SMPTE meetings in Hollywood, and that’s the … I think there are hotels in New York that have that same carpet, Jesse. So that could be a myriad of different ballrooms, but [crosstalk 00:53:41], it’s home.
Jesse Foster:
So to show you what we did in particular here: we had three camera positions using AMIRAs with the SilverBack V. The SMPTE fiber made all the connections easy up in these hard-to-get-to locations. They came down to these Juice boxes here, which converted it to ST fiber, which got down to the truck that was on Hollywood Boulevard via just the tactical fiber.
Jim Jachetta:
Yeah, because the hotel is not a broadcast facility. They probably had some fiber in the wall, but the fiber in the wall was single-mode ST, right? And so the MultiDyne gear, the SilverBack V, the Juice box, et cetera, are bridging the more generic infrastructure to the more broadcast infrastructure.
Jesse Foster:
Absolutely. This is a key application of everything we’ve been talking about: doing an award show with cinema cameras, with real-time live switching. This was done at 23.98, so it has the film frame rate look, and the Mobile TV Group production switcher and everything was set up accordingly, and they cut a live show using digital cinema technology. So it’s pretty cool. Two more slides, some new products. We had an acquisition, so this is a shameless plug for some of our latest offerings here.
Jim Jachetta:
This is important stuff.
Jesse Foster:
Yeah. So I just wanted to give a tip of the hat to an acquisition we made late last year of Census Digital out of Toronto, Canada. With that we got two different product ranges. The NanoBriX is an evolving line of throwdown products; we do very cost-effective dual 3G or 12G transport. We also have some HDMI capability, or SDI to HDMI with downmix, which is a nice value add where you can monitor surround sound broadcast signals as PCM audio. It’ll downmix it to proper stereo so you can hear it on a consumer monitor. You won’t lose your center dialogue, for example; it’ll be mixed into the left and right speakers. So it’s a big value add for low-cost, high-end signal monitoring.
Jesse Foster:
The NanoBriX 12G DA is a low-cost, easy-to-deploy 12G-SDI distribution amplifier. We also have it for MADI signals, so 64 PCM audio signals time-division multiplexed in a single MADI [inaudible 00:56:26] connection; it gives you multiple copies of that. And then we also have embedders and de-embedders for AES audio. Coming at NAB, we’ll have a MADI de-embedder that takes MADI in with up to eight analog audio channels de-embedded, so it’s a DAC as well as a MADI de-embedder. You could cascade multiple units and get access to all 64 channels in the analog domain using that product.
Jesse Foster:
And then the other line that we’re proud to have here is our rackmount audio monitors. There’s another company out there that people think of as the Kleenex of audio monitors, but we’re like, we’re facial tissue and we’re very good [inaudible 00:57:15], I don’t know.
Jim Jachetta:
You’re like the Kleenex with the lotion in it.
Jesse Foster:
Yeah, we’re better.
Jim Jachetta:
It’s just a little better, it’s a little better.
Jesse Foster:
[inaudible 00:57:22].
Jim Jachetta:
Or it’s much better, you don’t get a red nose.
Jesse Foster:
That’s like a bad joke, but I think it does apply, because these are just as good in a lot of cases and they’re popular products. The Fox network headend here in LA has multiple of these units; they’ve just picked up some additional video-integrated monitors. So yeah, it’s an integrated video and audio monitoring product. This 16-channel embedded version here also has Dolby decoder functionality available, so you can do Dolby E or AC-3 or E-AC-3 decoding and see the content and hear it.
Jesse Foster:
Here’s a MADI audio monitor; it shows you all 64 channels simultaneously and you can pick the payload you want to hear on the monitor. They all have the ability to bypass the local speakers and let you use the volume control to drive a variable output to amplifiers. So you could feed active studio monitors or an amplifier system, so you can have louder audio right by your ears. It’s all very flexible for different applications, for trucks and loud machine rooms and so forth.
Jesse Foster:
I think that’s what I had prepared today; I tried to keep it to this particular area of our product line. But we do a lot more, like the streaming and so forth. So if anybody’s interested, ask Jim, and Jim will come to me if he has any questions, but at this point in time [inaudible 00:58:51].
Jim Jachetta:
Yeah, I think I see a few questions. VidOvation, we haven’t really made our announcements yet of what’s going to be at NAB. You mentioned a new MADI de-embedder is coming. Is there anything else you can talk about, or should we just tell people to stay tuned to both VidOvation and MultiDyne to see what’s coming for the NAB show?
Jesse Foster:
Yeah, I should be more prepared to answer that, because I have a list and I was working on it. But we have an evolution of our intercom products called the Fiber Comms, which is an all-in-one product that will do the two-wire to four-wire conversion and optical transport in a single device with integrated power. That’s a popular product that we’ve been asked to do. The audio monitors will have a 12G upgrade made to them, so we’ll have the ability to monitor 12G embedded audio. Some other … that’s what’s coming to mind right now, so I kind of-
Jim Jachetta:
Yeah. Also, I know Frank might say, Jesse, don’t say too much, we don’t know yet, or Bob might. We’re kind of the same way at VidOvation; we obviously do a lot with fiber. I’m sure you all know Frank Jachetta; a lot of people get us mixed up, we’re brothers. I am actually a few years older than Frank, and our dad started MultiDyne. My midlife crisis 10 years ago was starting VidOvation out in California. We do a lot with IP, a lot with fiber, a lot with wireless. We should be picking up a line of IP diagnostic, IP monitoring products; we’re hoping to show that at NAB.
Jim Jachetta:
I do have a very good question, but while we’re on the subject of NAB: your booth, Jesse, is Charlie 5013, C5013, and the VidOvation booth is Charlie 6945. Your booth will be a little bigger than ours, but we’re in the same hall. We should have some MultiDyne product in our booth, but you can come to VidOvation and we can set up meetings with Frank, Matt, Jesse, or Bob in the MultiDyne booth. Of course, for any of the VidOvation solutions, please come to our stand. So, Jesse, Daniel asks: is there any latency with the SilverBack V, between the camera head and your SilverBack V, whether shooting at 23.98, 29.97, or 59.94? Is there latency, and does the frame rate you’re using matter?
Jesse Foster:
No. Standardly, we have two different scenarios. There is one path that actually has some processing in it to allow you to do quad link to dual 6G or to 12G; we can convert the SDI in the transport in some cases. So there is some low amount of latency there, I’m not … it’s all sub-frame regardless. I don’t have that memorized though; I can get you a better answer, but that’s [crosstalk 01:02:34].
Jim Jachetta:
Well, right. So in the platform, say you’re taking 12G: if you home-run the 12G serially to the base station, the latency should just be the speed of light, basically. But if you have to take the 12G apart and demux it, or mux it into four 3Gs, obviously there’s going to be some processing to do that conversion. Is that what you’re driving at?
Jesse Foster:
That’s what I’m saying.
Jim Jachetta:
So for the most part there’s no latency, but there might be some exceptions. Is that kind of the answer?
Jesse Foster:
That’s correct. If there’s processing to do the conversion, it’s going to be sub-frame; I believe it’s sub half a frame, even. But the other scenario is just E-to-O, O-to-E, which is, I think, microseconds honestly, it’s low.
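To put the "microseconds" versus "sub-frame" discussion in perspective, the sketch below compares fiber propagation delay, roughly 5 microseconds per kilometer, against a frame period at the frame rates mentioned. These are generic physics figures, not measured numbers for the SilverBack or any other product.

```python
# Perspective on the latency discussion: light in glass travels at roughly
# c / 1.47, so fiber propagation costs about 5 microseconds per kilometer,
# which is tiny next to a frame period at any of the frame rates mentioned.
# Generic physics only; not measured figures for any product.

C_KM_PER_S = 299_792.458
FIBER_REFRACTIVE_INDEX = 1.47  # typical for single-mode fiber


def fiber_delay_us(distance_km: float) -> float:
    """One-way propagation delay through the fiber, in microseconds."""
    return distance_km / (C_KM_PER_S / FIBER_REFRACTIVE_INDEX) * 1e6


def frame_period_us(fps: float) -> float:
    """Duration of a single video frame, in microseconds."""
    return 1e6 / fps


if __name__ == "__main__":
    for km in (1, 10, 20):
        print(f"{km:>2} km of fiber       : ~{fiber_delay_us(km):8.1f} us")
    for fps in (23.98, 29.97, 59.94):
        print(f"one frame at {fps:5.2f} fps: ~{frame_period_us(fps):8.1f} us")
```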
Jim Jachetta:
Okay. Well, and then also … I’m not a production expert, but my understanding is that if one camera, just due to the technology, is a little behind the other cameras, you use the weakest link as your reference and then you delay or advance the other cameras to match the one that’s lagging, or vice versa; you can do some trickery with offsetting the genlock of certain cameras to get them all to sync up in your truck. If there’s something odd with some of your cameras or your setup, you just have to use frame synchronizers to get everything in sync in the truck. Obviously, if you have a truck that’s doing 20 cameras, there may not be 20 frame syncs in the truck, so there are always bottlenecks.
Jim Jachetta:
So, Daniel, I’ll make note of your question; if that doesn’t quite answer it, I will put you in touch with Jesse and you can dive a little deeper. Anybody else have any questions or comments? Let me see. It looks like we’re all good. So, wow, I thought we were going to go 90 minutes, Jesse. We ran a little over an hour; that’s good for me. Maybe I need a partner, Jesse, so we keep each other on track.
Jesse Foster:
Yeah, who’s the straight man, I don’t know.
Jim Jachetta:
Yeah, I can tell stories for three hours. Well, thank you so much Jesse. Thank you for sharing everything today. Say hello to Frank. Say hello to Bob. Say hello to Matt. Say hello to everyone over there, my old family and friends. I see my brother more at industry events than at my mom’s kitchen table, it’s the way it is.
Jesse Foster:
[crosstalk 01:05:22], but what are you going to do?
Jim Jachetta:
Yeah, who’s not as good, but yeah. All right, well thanks everybody. Thanks for tuning in and look for a recording of this event in our blog. And, Daniel, I’ll put you guys in touch with Jesse for those deeper questions. Thanks so much and have a great day. Thank you for joining. Thanks, Jesse.
Jesse Foster:
Thanks for inviting me.
Jim Jachetta:
Bye.