
Advanced Image Robotics Workshop by VidOvation [Recording]

Last updated Feb 26, 2024 | Published on Jan 23, 2024 | Featured, Webinars, Past Events

Advanced Image Robotics Workshop Replay

Workshop Preview Video

AIR’s technology solves a number of critical, long-standing problems in the video production workflow in a way that no one else is doing.

Ted Schilowitz

Futurist, Paramount Pictures

Advanced Image Robotics solves pain points experienced over 30 years of live video production. Content creators and producers shouldn’t have to be IT experts, and it shouldn’t take the GDP of a small country to broadcast an event.

Our executive team has worked with some of the biggest brands in entertainment and tech, won multiple Emmy® Awards, and driven some of the world’s biggest product launches. Over the last few years, technical advances have given us the pieces to radically simplify this industry, and we’ve combined our collective expertise into a purpose-built platform. We can’t wait to see what content you’ll create with it!

The OTT livestreaming market is projected to grow from $30 billion to $534 billion by 2030. Advanced Image Robotics gives you the tools to scale your content production cost-effectively with that growth.

In this workshop replay you will learn about:

  • Cinematic cameras in a live broadcast workflow.
  • Organic, smooth camera movement with a robot.
  • How camera operators can work from home.

Watch the Advanced Image Robotics Workshop by VidOvation Recording

AIR One Workshop Presentation – Click to Download

HOW AIR SIMPLIFIES VIDEO PRODUCTION

  • Robotic controls and machine precision give you greater accuracy in shooting.
  • Camera equipment sets up and breaks down in minutes.
  • Direct cloud connection streamlines the camera-to-viewer workflow.
  • Cloud production enables distributed teams and a more efficient labor force.

DATA POINTS

  • AIR One™ estimated production cost savings of >50% (mileage may vary).
  • Investing in more engaging content typically yields a 24% uplift in subscribers.
  • The Beijing Olympics had record-low TV viewership (8.7M) and record-high streaming viewership (12x higher, at 106M)!
  • In 2022, the PGA expected to broadcast ~650 hours of golf and to livestream over 4,300 hours. But this represents only 35% of what they want to do!
  • The average video editor loses $7,000 in productivity per year simply managing and searching for media.

Workshop Summary

(0:00 – 10:06): Introduction and Overview

Jim Jachetta, co-founder and CTO of VidOvation, introduces Kevin McClave, CEO, and Nick Norquist, chief product officer, from Advanced Image Robotics. McClave clarifies that although cameras are involved in their work, they are not a camera company but a solutions, cloud computing, and software company addressing a hardware problem. They integrate digital cinema cameras into smart gimbals for enhanced functionality, particularly in live event shooting. The system includes an intelligent gimbal, a cloud platform, and an iOS app for camera control. They emphasize simplifying and automating setup and operation, making high-quality live event production accessible without extensive technical expertise.

McClave responds to a question about IT requirements, emphasizing their focus on making the setup process plug-and-play rather than requiring an IT team. Norquist elaborates on their approach of transforming digital cinema cameras into IoT devices to streamline setup and operation, leveraging IP protocols and automation. They highlight the challenge of integrating different devices in traditional setups and emphasize their system’s simplicity and remote support capabilities.

(10:06 – 10:07): Transition to Tech Demo

The speakers transition to the technical demonstration segment of the discussion.

(10:07 – 14:17): Tech Demo and Control Interface

Norquist begins a screen share to demonstrate the user interface (UI) running on an iPad for controlling the robot. He showcases the intuitive controls for moving the camera and setting preset shots, emphasizing the simplicity compared to traditional PTZ joystick controls. They discuss the potential resistance from experienced joystick operators and mention plans for integrating other hardware controllers, including Skaarhoj and PlayStation controllers.

A brief technical issue is resolved, and Norquist provides a live demonstration of camera movement and control using the touchscreen interface. He showcases the ease of operation and the integration of zoom controls. The demonstration illustrates the intuitive nature of their control system, emphasizing its suitability for a wide range of users.

(14:17 – 22:24): Control Interface Features and Awards

The discussion continues with McClave explaining the functionality of the control interface, highlighting the integration with Skaarhoj for joystick control options and the automation of various camera settings triggered by double taps on the interface. They showcase the ease of setting up recall positions and the dynamic speed adjustments during camera movements. Norquist explains the patented tele-damping feature, which adjusts movement speed based on focal length to ensure smooth operation, especially during live events. They discuss the importance of replicating human-operated camera movements in video production for a more natural and professional appearance.
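
To make the tele-damping behavior concrete, here is a minimal Python sketch assuming a simple inverse-proportional mapping between focal length and gimbal speed. AIR’s patented implementation is not public, so the formula, the 24 mm wide end, and the function name are illustrative assumptions only.

```python
def tele_damped_speed(base_speed_dps: float, focal_mm: float,
                      wide_mm: float = 24.0) -> float:
    """Scale pan/tilt speed down as the lens zooms in.

    base_speed_dps: speed requested by the operator, in degrees/second
    focal_mm:       current focal length reported by the lens
    wide_mm:        focal length at the wide end of the zoom (assumed)
    """
    # Factor is 1.0 at the wide end and shrinks as focal length grows.
    factor = wide_mm / max(focal_mm, wide_mm)
    return base_speed_dps * factor

# A 30 deg/s drag at 24 mm slows to 3 deg/s at 240 mm, keeping tight shots watchable.
print(tele_damped_speed(30.0, 24.0))   # 30.0
print(tele_damped_speed(30.0, 240.0))  # 3.0
```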

The conversation shifts to the awards won by Advanced Image Robotics, including Technology Innovation and Innovation awards, highlighting their unique approach and industry recognition. McClave and Norquist reflect on the challenges they’ve addressed in camera automation and emphasize the importance of their patented technology in streamlining operations and enhancing production quality. They discuss the integration of advanced features like dynamic speed adjustments, which were previously unavailable in traditional PTZ systems.

(22:24 – 22:24): Reflections on Challenges and Patents

The speakers reflect on the complexity of their patented technology and the challenges they’ve overcome in integrating advanced features into their system, acknowledging both the difficulty and significance of their achievements.

(22:25 – 29:08): Introduction to AirCloud and Setup Automation

The discussion transitions to the AirCloud platform, which serves as an orchestration layer for production setups. Norquist demonstrates the platform’s capabilities, including project management, crew coordination, and gear deployment using drag-and-drop functionality. He explains how the platform automates the setup process by pre-configuring camera inputs and destination IP addresses, streamlining operations for camera operators. Norquist illustrates the simplicity of launching cameras on the day of the shoot using the AirCloud platform, showcasing how settings are automatically populated for camera operators, enhancing efficiency and reducing setup time.
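
As a rough illustration of the pre-configuration AirCloud automates, the Python sketch below maps camera numbers to SRT destinations on a cloud vMix instance, following the port convention described later in the demo (port 5000 plus camera number). The host address, base-port constant, and function name are hypothetical.

```python
VMIX_HOST = "203.0.113.10"  # hypothetical cloud vMix instance address
BASE_PORT = 5000            # per the demo, camera N streams to port 5000 + N

def srt_destination(camera_number: int) -> str:
    """Build the SRT URL a given camera should stream to."""
    return f"srt://{VMIX_HOST}:{BASE_PORT + camera_number}?mode=caller"

# "Deploying" six cameras pre-populates every destination, so operators
# never have to type an IP address on shoot day.
deployment = {n: srt_destination(n) for n in range(1, 7)}
print(deployment[5])  # srt://203.0.113.10:5005?mode=caller
```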

(29:09 – 36:31): Integration with Communication Systems and AirCloud Features

The discussion continues with integrating communication systems into the AirCloud platform, allowing directors to communicate with crew members seamlessly. Norquist elaborates on the versatility of AirCloud, explaining its compatibility with various communication applications beyond the three mentioned earlier. He demonstrates the platform’s ability to provide real-time data on camera status and streamline the color-matching process for multiple cameras during production setups. Norquist emphasizes the platform’s flexibility and remote accessibility, enabling users to control multiple robots and monitor production activities from anywhere with an internet connection.

Norquist also discusses the simplicity of setting up static IPs and the platform’s ability to adapt to restrictive network security policies by creating VPN connections between robots and control devices. He highlights the user-friendly nature of the system, allowing individuals without extensive IT backgrounds to manage complex setups efficiently. The conversation touches on the integration of both robotic and traditional cameras in productions, showcasing the platform’s adaptability to various workflows and the option to bridge cloud-based and physical setups seamlessly.

(36:31 – 42:04): Mixing Robotic and Traditional Camera Setups

McClave provides examples of recent shoots where robotic cameras were integrated into traditional setups, demonstrating the flexibility of their system. He explains the process of using SDI outputs from robotic cameras to connect to conventional production trucks, allowing for a hybrid workflow that combines the benefits of robotic PTZ cameras with the reliability of conventional setups. McClave discusses the possibility of incorporating non-robotic cameras into the AirCloud planning tool, highlighting the platform’s compatibility with SRT streams and its ability to accommodate various camera setups for diverse production needs.


(42:06 – 54:48): Bridging Robotic and Traditional Camera Setups

The conversation delves into the technical aspects of integrating robotic and traditional camera setups. Norquist explains the various options for routing camera feeds, including direct connections to trucks via baseband and the SRT transport protocol for cloud-based switching. He highlights the flexibility of AirCloud in accommodating different camera setups, whether integrating other cameras via Haivision encoders or interfacing with existing infrastructure like a REMI network operations center. The discussion also touches on the availability of an open API, allowing for seamless integration with pre-existing ecosystems and addressing the needs of clients with specific requirements.

McClave provides examples of recent productions, such as UFC events and the NBA Summer League, where robotic cameras were strategically deployed to capture unique perspectives without obstructing views or compromising safety. He emphasizes the flexibility of robotic camera systems in enabling productions to place cameras in unconventional locations, enhancing the quality of coverage and opening up new creative possibilities. The discussion extends to potential applications in House of Worship settings and rock concerts, highlighting the benefits of shallow depth of field and the ability to focus on specific subjects dynamically.

(54:49 – 1:00:59): Overcoming Latency Challenges and Rental Options

The conversation shifts to addressing latency-related challenges in remote camera control, particularly when utilizing bonded cellular networks. Norquist and McClave discuss strategies for mitigating latency issues, emphasizing the importance of using set and recall functions for precise camera positioning despite video feed delays. They explore the capabilities of Haivision hardware in achieving low-latency connections over bonded cellular networks and highlight the significance of adapting to varying network conditions for optimal performance.

Additionally, the speakers discuss the comprehensive package offered by VidOvation, including all necessary components for immediate use, such as cameras, lenses, control devices, and connectivity options. They underscore the convenience of the bundled package, eliminating the need for additional purchases or customization to get started with robotic camera setups. The conversation concludes with a discussion on rental options for VidOvation’s equipment, highlighting the company’s willingness to accommodate diverse client needs and explore innovative solutions for remote production setups.

(1:00:59 – 1:06:43): Addressing Latency and Technical Considerations

Norquist and McClave continue discussing managing latency challenges in remote camera control, emphasizing the importance of planning and rehearsal to mitigate potential issues during live productions. They highlight the role of bonded cellular networks in providing reliable connectivity for remote camera setups, acknowledging the need for backup plans and contingency measures to address dead spots or network fluctuations. The conversation also touches on the technical specifications of power over Ethernet (PoE) compatibility, clarifying the limitations and potential solutions for powering robotic cameras efficiently on location.

(1:06:43 – 1:07:04): Final Remarks and Future Plans

As the conversation draws to a close, Norquist expresses satisfaction with the topics covered and invites any final questions or observations from the audience. He ensures that the recording and webinar materials will be made available for further reference, highlighting the opportunity for interested parties to schedule meetings for project discussions. The speakers extend their gratitude to the audience and express excitement about the potential applications of their technology in various production settings, emphasizing the accessibility of their solutions for creating high-quality content.

[Conclusion]

The webinar concludes with expressions of gratitude from both the hosts and the audience, highlighting the collaborative and informative nature of the discussion. Norquist and McClave’s comprehensive overview of robotic camera technology, integration considerations, and technical specifications provides valuable insights for professionals in the broadcasting and production industry. The webinar serves as a platform for sharing innovative solutions and fostering engagement within the community, paving the way for future advancements in remote production workflows.

Full Transcript [Raw]

[Speaker 1]

Good morning, everyone. I’m Jim Jachetta, co-founder and CTO of VidOvation. This morning, we have two exceptional guests from Advanced Image Robotics: we have Kevin McClave, co-founder and CEO, and we have Nick Norquist, co-founder and chief product officer.

 

And we’re going to learn a lot about robotics. And I’m remembering something very important. Cameras are involved, but you’re not a camera company.

 

That’s correct. Is that a good place to start? Like, might be a good place to start.

 

And, you know, let’s keep this conversation interactive, right? We, we, you know, we want to be sure that we’re covering all the bases and that we’re keeping it interesting. But yes, again, I’m Kevin McClave, CEO and co-founder of Advanced Image Robotics.

 

And yeah, that’s typically what we like to start about. See, see our stuff. And what we like to say is that we’re not a camera company, right?

 

We’re a solutions company; we’re a cloud computing company. Really, we’re a software and cloud company with a hardware problem with a, with a camera problem. Or hardware, hardware accentuated, or hardware, hardware-enabled software solution.

 

So, a little bit more about what that means is that what we’re really doing is taking high-end digital cinema cameras off the shelf. So, whether what our go-to market was, it was with the Z cam, which is a great little camera, or you’re looking at a Sony alpha, or you’re looking at a red or maybe a Panasonic Lumix. You can change lenses on these nice, great little box cameras, smaller cameras with great depth of field and great image quality.

 

But what they don’t have is a lot of movement functionality. So we’re taking what you would get in a PTZ, which of course stands for pan, tilt, zoom, as we all know. And we are taking these great off the shelf cameras, putting them into our smart gimbal.

 

And then so that they’ve got, they’ve got their super level, you can level them off automatically. You can just plug and play with them. So you’re getting this great PTZ action.

 

And then there’s a bit of cloud magic and tech magic that we do that, that lets you enable, that makes it really, really easy to set up and shoot for live events, as we’ll see here. So really what our platform is, and we say it’s not, it’s a platform and it’s a solution, but really what it is, it comes in three parts. So the first part is this smart gimbal, or this robot, as we say.

 

And we built, as I mentioned before, we built around that digital cinema camera. And then all I need from there is a power supply and an ethernet port. And I can plug, plug in the power, plug in the ethernet and all that backend stuff that folks are, usually have to do for live events, like the, the super expensive switching gear, the the satellite truck, all of these things, we take care of that all up in the cloud and we automate as much of that as possible.

 

So on the front end of our cloud, we have a prep module that Nick will show off a little bit later, that enables you to take a map of your site and lay out where you want the cameras and assign operators to those cameras and lay out all, all the, all the settings for those operators and those cameras, and then push that out, publish it to the individual camera units. So that when your tech gets there, all they got to do is open it up, scan a barcode on, from the display on the back of the camera, and then they’re able to set those up, know exactly where everything goes. And then when you plug it into the system on, it automatically pop, populates to, to those individual camera operators.

 

From there, the cameras can be managed and controlled with an iOS app. And again, the, the people ask, does the iPad come with the kit? Yes, it does.

 

So we’ve got an iOS app that enables you to control those cameras that are on our cloud from anywhere in the world, as long as you’ve got a decent internet connection. And in that app, what we’ve done is we’ve automated a lot of these really, really complicated, really precise movements that normally would take a camera operator years and years to learn. So kind of our logic behind that is, yeah, we’ve got a lot of flying rocket ships these days with, with the touchscreen.

 

There’s no reason, there’s no reason for us to be, continue to, to manually control cameras when we can make it easier and better to machine control cameras. And then, and then all those signals go back up into our cloud where we’ve created this, this ecosystem with this orchestration layer that enables you to use a bunch of different off the shelf apps to do your graphics, to do your switching, to do whatnot, to do that all in one easy place and then stream out to any CDN. So really what this means is, you know, in a nutshell, if we work backwards, IP is the way we distribute high-end video now.

 

And then in the last few years, there’s been this middle layer of all these different apps that have come out and let us do these different production functions, whether it’s switching or audio or graphics or et cetera. But really this front end of live video capture still looks like the eighties and nineties. We’re still using these giant cameras.

 

We’re still using trucks. We’ve got, we’ve got giant cabling. It’s really a big mess in terms of how we set this up and how complicated it is.

 

So the AirOne robot simplifies that front end. And then the AirCloud orchestration layer, what it does is it just takes all of those things, put it in one nice, easy package. So that we make it very, very simple and easy for anyone to do a full pro quality live event production.

 

Let me ask you a loaded question, Kevin. Please do. It sounds like I need a whole IT team to get that network camera connected to the cloud.

 

How do you address that? Do I need an IT team to punch in IP addresses, open ports and get everything talking to each other? We’ll have Nick cover that a bit later, but the short answer is no, that’s exactly the opposite.

 

Nick and I both have different motivations for what gets us up in the morning. Mine is that I hate going to a concert and having some giant hulk of a human being standing in front of me with a camera on his shoulder, blocking my view. Nick’s is in his 30 years of live production, it is having to connect everything and nothing talks to each other and having this huge complicated IT thing.

 

So that’s the whole concept between our AirCloud is that we want to make it plug and play. You pull it out, you pull it out of the kit, plug in the power, you plug in the ethernet, probably in the opposite order. And then everything just works.

 

Yeah, Jim. So the way you guys can kind of think about this in a nutshell is we’ve turned a digital cinema camera robot into an IoT device. So now that it’s an IoT device, we can automate it, we can take all of that cumbersome stuff and complication that comes from setup and automate.

 

So essentially the orchestration layer or the AirCloud makes all of that stuff drag and drop. I mean, it’s literally you say, I want this camera to go here, I want this person to control it, do that. For years, we’ve had this workflow where we have very expensive proprietary devices with a dedicated video cable going into another dedicated video device.

 

And that is extremely difficult to automate. And so what that means is if it’s not automated, I have to do everything manual, I have to roll a whole army of people in gear to the site. What we’re trying to do with Air is make it much simpler to do that.

 

Take advantage of IT workflows, take advantage of the things that IP protocols bring, make it plug and play. I mean, that’s been 30 years in the game, that’s been one of my biggest hangups that stuff doesn’t talk to other stuff. So that’s what the AirCloud orchestration layer is about.

 

Well, I think that’s the customer’s biggest fear. And then also for VidOvation as a partner of AIR. If a vendor’s gear doesn’t work, then VidOvation or AIR has got to send a tech on site to get everything talking to each other.

 

And then these are temporary events too. Nobody documents what they do. Well, it worked last time.

 

Did anybody make a drawing? Did anybody write down the IP addresses? And Jim, we’ve been able to provide as part of what our cloud service advantage is, is that we can do a lot of that tech support remotely.

 

We had an incident where we were at the World Athletic Championships in Budapest last August, had some issues with the robot on site. Our tech was able to remote in from California and fix it in like two minutes. Yeah, I was there with the robot.

 

It was a new prototype that we were testing in this particular environment, and something wasn’t working. So I got, hey, Zuby, can you remote into the robot and fix the XYZ? And easy as if he were on site.

 

That’s awesome. That’s awesome. I love it.

 

This seems like a good place to segue to the geeky tech demo portion of the show. So let’s do it.

 

[Speaker 2]

Let’s do it.

 

[Speaker 1]

Why don’t we hand over to Nick? I’m already thinking of customers that need this. So my wheels are turning.

 

Yeah, let me do a little screen share here. I’m going to share screen here. So this is our UI that you’re seeing here.

 

So this is running on an iPad. The basic controls for the robot are here. I’m going to control it with another iPad so you can see kind of what the movement there looks like.

 

Essentially, all I’m doing is touching on the touch screen here and you’re going to see a little white dot come on there and moving in a direction away from or towards. Yeah, I can basically just drag it in any direction I want. And the further I move it away, the faster it goes.

 

I also have these recalls down here at the bottom that I can double tap and it’s going to return to those positions with easing. Yeah. Yeah.

 

[Speaker 2]

Let’s see.

 

[Speaker 1]

So before the event, you can set up some preset shots. And you were saying, Nick, when we were rehearsing, that, you know, it takes training to run a PTZ joystick, and that you kind of invented this for yourself. Well, yeah.

 

One of the things that’s always made me crazy about PTZ is that it’s very, very difficult to translate what you’re trying to do with the joystick to your brain and what your eye is seeing. The beauty of this is that actually, let me shift to I’m going to shift to another. You want the camera you want the camera to go down, down, down into the right.

 

You just drag your finger down into the right. You know, it’s exactly I’m going to go to a little over the shoulder shot here. It’s it’s it takes less training.

 

Now, do you find if you have someone who’s skilled at joystick resistant to this, you know, you know, I’m thinking of us older guys, right? You know, us gray haired guys, we’re set in our ways. You know, I don’t I don’t want to use an iPad to change to control my my camera or this product’s not for them or they’ll have to learn a new skill.

 

Well, so we want to be agnostic about what you want to use to control it. So we do have an integration coming with Skaarhoj for their joystick controller. We’re in the process of integrating a PlayStation controller right now.

 

If you want to do that, we have some other hardware controllers coming in as well. But this is one of the beauties of it. Like literally, literally every single PTZ operator that comes up to me at NAB says, I got to have a joystick.

 

And I say, OK, you can have a joystick. No problem. But once you use the touch screen, you’re never going to want to go back.

 

Because basically what happens is you guys, you’re not seeing the whole hold on. I got to show. Yeah, we’re we’re seeing your control your control surface.

 

Yeah. Let me you you had something up, but it was like we were seeing picture in a picture in a picture. But we could see what you were doing.

 

Let me share this other. You need a camera over it. You need a camera over your shoulder.

 

Break out another robot. Put it over your shoulder. We’re flying in.

 

I got a second. I got a second robot. So there you go.

 

So now we got the now we got the over.

 

[Speaker 2]

Yeah, it’s a little.

 

[Speaker 1]

Oh, now it’s in focus. Perfect. Yeah.

 

There you go. Yeah.

 

[Speaker 2]

Yeah.

 

[Speaker 1]

So basically you can see this is the test. Let me come wide here so you get a little bit more context so you can see I can go in any direction and do easing to a stop by just kind of moving my finger back to that point of origin. And then you have the other things.

 

Your zoom is there on the left. That’s your zoom. Yes.

 

[Speaker 2]

So. So. So.

 

So. Oh, yeah.

 

[Speaker 1]

So one finger is running the zoom and one is running the position. Correct.

 

So and again, we’re going to have other control modules for that as well. But we’re we’re we’re a Skaarhoj partner as well. So so that that’s perfect.

 

VidOvation and air. We could bundle the whole thing together if you if you insist on having a joystick. Yeah.

 

And the beauty of the Skaarhoj stuff is they’ve already integrated the Z Cam, so you could have your complete controls for everything all in one. Double tap is generally doing something automatic: triggering the spot autofocus, you know, double tap to turn the auto ISO on and off, or your white balance. In general, a double tap is telling it to do something automated.

 

So double tap is returning to a position there. Double tap is doing a slow move to that next position, whatever that might be. You can interrupt any of the calls at any time by either double tapping or or just grabbing control of the screen.

 

Go back to your go back to your product of the year awards there. You got a couple of NAB awards.

 

[Speaker 2]

Oh, yeah. Yeah.

 

[Speaker 1]

Look at that. Look at that. Two years in a row.

 

Two years in a row. We got it for the robot in ’22 and for the AirCloud in ’23.

 

Oh, Nick, you forget the most important one. Oh, yeah. The the innovation award.

 

Hang on. Stand by. Keep talking.

 

I’ll go drum roll. Drum roll. Go get it.

 

Yeah. So generally, when I set these recalls up, because you can control the speed, like say I have this position here I want to go to, I can come in there, set my focus, long press on the button, and I can set that. Now watch, it’s going to go back. And it’s a little thumbnail.

 

So you see what what the shot is. That’s cool. And I noticed that some of the some of the presets went very quickly.

 

Others, if your camera’s live, you can have slower transition, pull out and in and, you know, the TD or the director has the option of keeping you live while you’re slowly panning and reacquiring. Yeah. If you’ve ever done a try to do a live move with a PTZ camera, it’s it can be it can get very scary or frankly, impossible.

 

Yes. You got to zoom out to acquire and then zoom in, you know, especially if you’re going for from one tight shot to another. Right.

 

You got to you got to look at that. It’s on it’s on it’s on a wood wooden plaque that makes it really, really special. Awesome.

 

So, yeah, this if you don’t know about the Technology Innovation Award, generally these things go to some giant corporation or research institution or whatever. The Dolby Labs, Dolby Labs. Yeah, yeah, yeah.

 

It’s a it’s kind of it’s kind of a big deal. Yeah. Yeah.

 

So we’re the only as far as I know, we’re the only startup to ever have won that award. Kevin, you’re a great presenter. I think Vanna White is going to be a loser over there.

 

Oh, so, Jim, one of the other things. So this by taking this camera and making it an IoT device and making all the controls happen over IP, we can also do a bunch of stuff that you can’t really do with a regular PTZ because in this robot is a brain. There’s there’s a CPU in the back here.

 

So we can do all kinds of really interesting stuff that you can’t do with with a regular even a regular PTZ. So you’ll notice the speed at which it’s turning right here. Watch as I zoom out.

 

It’ll pick up speed and go faster. Watch how it changes the speed when I move from one direction to the other. Yeah, you you were saying that I again, I’ve done some volunteer camera work at our local church.

 

And, you know, when you’re when you’re when you’re, you know, for an effect, you know, like zooming in, zooming out instinctively, right. You go fast at first and then you kind of slow down or right that that that’s kind of the the the how a human would do it. And the robots miss that.

 

Right. And you’ve programmed that in. Yeah, I did.

 

I was on the over the shoulder. It wasn’t showing speed. Let me do it again.

 

So here under the zoom button, we have something called tele-damping. It’s one of the things that we patented. And what happens is because we have that computer brain in there, it knows the focal length of the lens.

 

So when you’re on the long end of the lens, it actually slows down the robot’s movement. And it’ll do that dynamically. Also, because when you’re on a tight shot, if you move things too quickly, in, out, left, right, up, down, you’re going to throw up.

 

You’ve got you. Yes. Now a skilled a skilled camera human operator will know that don’t don’t make any sudden movements.

 

You know, the worst is when, you know, in a church environment, you got a speaker on stage and they’d be like, Pastor, so and so’s a runner. He’s going to be going left. He’s going to be going right.

 

He’ll go left and then he’ll do a head fake and go right. So and you but be tight on his face, be tight on his face. You know, so catching that action.

 

If you move too quick, you’re going to make people throw up or you got to zoom out till till you till they stop moving. So you’ve thought of all these things. Well, it’s kind of addressing things that have, you know, made me crazy for years in in production, like, you know, just the complication of using a PTZ.

 

You know, the fact that I have an electromechanical device that I can’t really control very well. I can’t set it up to do easing, for instance. In post production, if I set something up and I want a nice ease to a stop with some kind of motion, that’s super easy to set up, right, just put a keyframe in there, say ease. I can’t do that with cameras.

 

So that was another one of the things that we’re trying to tackle here: okay, I have a movement that I can program from one point to the other. But if it’s doing this, that’s very robotic; it doesn’t look like a human camera operator. It’s not the way a camera operator would do it.

 

Right, start fast, go slow, and then ease to a stop. So that’s what we’re doing with the robot when you do those recalls.

 

I don’t know if you saw it, but it will ease to a stop. Some of them had the nice ease; some of them were almost like a quick cut. But you program that in. Some shots, you know, it’s a critical thing, we need to snap to that.

 

Otherwise, we’re gonna miss, we’re gonna miss the layup, we’re gonna miss the slam dunk. So you need to snap to the backboard quickly. But other shots, you want to be more fluid and be more like a human.
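
The ease-to-a-stop behavior the speakers describe can be sketched with a standard easing curve. The cubic ease-out below is a common choice used here purely for illustration; the transcript does not specify which curve the robot actually applies.

```python
def ease_out_cubic(t: float) -> float:
    """Map linear progress t in [0, 1] to eased progress: fast start, slow finish."""
    return 1 - (1 - t) ** 3

def recall_path(start_deg: float, end_deg: float, steps: int = 5):
    """Yield pan angles along an eased move between two recall positions."""
    for i in range(steps + 1):
        t = i / steps
        yield start_deg + (end_deg - start_deg) * ease_out_cubic(t)

# Big steps early, tiny steps at the end: the "human" feel, versus the
# constant-velocity snap of a traditional PTZ recall.
print([round(p, 1) for p in recall_path(0.0, 90.0)])
# [0.0, 43.9, 70.6, 84.2, 89.3, 90.0]
```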

 

And this comes from your production background, Nick. I mean, you’ve seen the limitations firsthand of generic PTZ. So that’s why you’ve put this into this product.

 

Frankly, Jim, it’s just dumb that we can’t we haven’t been able to do this before I look at this stuff. And I go, like, why can’t that thing talk to that thing? Why can’t this do an ease, ease to a stop?

 

The technical capabilities are there, but nobody has really gone in and done it before. And that’s what we’ve been doing for a couple years now. And now I know why nobody’s done it before.

 

It’s because it’s not so easy.

 

[Speaker 2]

It’s not so hard.

 

[Speaker 1]

But that’s great that you patented it and protected yourself. That’s kudos to you guys. That’s amazing.

 

That’s amazing. So Kevin showed us the what’s the planning tool again? Oh, you want to see the air cloud?

 

Yeah, the air cloud. Yeah, that that intrigues me. People have been, you know, people I’ve been mentioning you guys to certain customers, other film and production people that I know, they’re like, Why do I need to put a camera in a virtual space?

 

Are we doing virtual reality? Or it’s just a planning tool? Yeah, so it’s, it’s essentially a planning tool or an orchestration layer that that lives on top of everything.

 

And again, it’s kind of a drag and drop scenario. So let me, let me share screen again, here real quick. There we go.

 

So you got the web browser now? Yes. Okay.

 

So these, these can be different events, different projects, we see here, you know, you got different things going on. Along the left, we did, you know, NBA Summer League, we did a corporate BC event, we did NAB with AWS and Panasonic and our own booth and Z Cam. We did the World Athletics Championship in Budapest.

 

We did a UFC fight in New York. And in Vegas, we have the final four coming up here soon. So basically, how we use this is as a setup tool for coordinating the production.

 

So the different projects are here. This overview tab just gives me all the documentation for the shoot: when it is, the details about it. I can put various documents on there so that all of my crew can get in and look at it. I don’t have to send them individually.

 

If I need to dynamically update it, I can just go and swap those things out. I can get with the crew, your production notes, your shot list, etc., etc. The schedules, whatever. It’s a portal for everything.

 

Yeah, so that’s cool and everything. But that’s not really what’s exciting about it. What’s exciting about it is the automation.

 

So this is what we call our shoot setup panel. And I’m going to delete a couple of things off of here so I can add them back. So what you do is you you upload a site plan for your shoot.

 

And then on the left here, you have your crew and your gear. And you can drag and drop those things to deploy them. So say, for instance, I’m going to stream to a vMix instance that’s running in the AWS cloud, I can drag and drop that to my instance pane here.

 

And now that destination is deployed for all the robots that I put out. Let’s put Rory on here, I can take him and... So, to my question before, you know, setting up the IP address connection between that camera and that vMix. You drag that out, and all these cameras on the screen are automatically on inputs on that vMix.

 

So they all know where they’re going now, because the the destination IP is there for control. And when I take and drop Rory on here, and let me put, let me put Jeff on there as a camera operator. Now when I come in here and give Rory a camera number, I’m going to auto update this and watch what happens to this IP port number down here as I change the camera number.

 

So it’s pre-configured so that port 5010 is going to take the camera 10 input in vMix. And you know, that’s the vMix protocol: you put in the IP of the service, and then ports 5001 through 5010 map to camera numbers 1 through 10.

 

It’s that simple. Correct. And again, all that’s automated.

 

You don’t have to do any of that. All you have to do is say I want that to be camera one. Yeah, I have a camera one in here.

 

We’re just peeking under the we’re just peeking under the hood. So so you to prove that you know, what’s going on. So So again, maybe for troubleshooting or whatnot.

 

So you see the MAC address, I assume that’s the MAC address of the device. And it’s all automatic. Wow. I can change what I’m streaming at and what I’m recording at internally, because the robots will record internally to a CFast card and stream simultaneously; they also have an HDMI out that you can convert to SDI or fiber or whatever you’re doing.

 

So baseband switch, all of that stuff can run simultaneously. The cool part of this is, now that I’ve programmed this and saved my deployment, when it comes to the day of the shoot and it’s time for Jeff here to launch this camera, he comes in here to the launch pad.

 

And he goes, okay, I’m camera five. When he double clicks this on his iPad, I’m gonna show you here on the computer. But on the iPad, what you would see is, through deep linking, it’s going to come in and populate all of those settings. It’s going to populate the IP address, you can see there, that’s the IP address in the vMix instance for camera five’s input. You can see the frame size and frame rate for both internal recording and streaming are all populated. That’s all data.

 

All data in the URL to launch it. Yeah, so what it does is it actually pushes all of that stuff to the app. So all Jeff has to do is say, Okay, that’s my camera, put it in, you know, air station slot one, or whatever.
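
A sketch of the deep-linking step described here: every setting the operator needs is packed into the launch URL that AirCloud hands to the iPad app. The URL scheme and parameter names below are invented for illustration; AIR’s actual format is not public.

```python
from urllib.parse import urlencode

def launch_link(camera: int, dest_ip: str, dest_port: int,
                resolution: str, fps: float) -> str:
    """Pack per-camera shoot settings into a hypothetical deep link."""
    params = {
        "cam": camera,
        "dest": f"{dest_ip}:{dest_port}",  # the vMix input for this camera
        "res": resolution,                 # used for recording and streaming
        "fps": fps,
    }
    return "aircontrol://launch?" + urlencode(params)

# Camera 5 streams to port 5005, following the 5000 + N convention above.
print(launch_link(5, "203.0.113.10", 5005, "1920x1080", 59.94))
# aircontrol://launch?cam=5&dest=203.0.113.10%3A5005&res=1920x1080&fps=59.94
```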

 

And then when it comes time for him to stream, he doesn’t have to enter in any IPs. He just goes, go. Now when when Jeff logs in, he’s got credentials, and he’s only allowed to control that camera.

 

So there’s no confusion that he’s suddenly controlling Bobby’s camera accidentally. And then your comms are integrated. Well, maybe I’m jumping ahead.

 

But the comms obviously, so so you’re the director today, Nick, you need to talk to Bobby, are you ready? show starts in five minutes. You know, take your bio break.

 

Now you take your bio break. Yeah. Yeah.

 

So basically the idea for AirCloud is that it’s an orchestration layer that can work with all kinds of other apps you’re already using. You’re already running Unity, you can use that; if you use Discord or whatever you’re using for comms, you can continue to. Do you support others besides these three?

 

Or more?

 

[Speaker 2]

Yeah.

 

[Speaker 1]

So ideally, you can add a new comms here, and we’ll have the ability to put in others, whatever your preferred destination is. This is just basically a link to whatever portal is already there for you. It’s just a one-stop shop.

 

And like, let me ask you, if it’s like SIP protocol, theoretically any comm could integrate, or, you know, you’re working on other hooks. We are working on other hooks. The big ones right now are Unity; we have SyncStage built in. The big advantage of SyncStage is it’s actually built into the control app.

 

So on the iPad, one pane of glass, you click your little headphone down in the lower right hand corner, and the camera operators can automatically link into the SyncStage session. And the beauty of SyncStage is that it’s super low latency. This was originally developed for musicians to jam together during COVID.

 

And obviously, to do that, you have to have super low latency. And it’s like 25 to 35 milliseconds. It’s literally like having a conversation in real life.

 

[Speaker 2]

Yeah, yeah, yeah.

 

[Speaker 1]

A lot of churches were able to use it, you know, to sing harmony and play instruments. You know, it didn’t matter if you were two miles away or thousands of miles away, right? Because the human eye is more forgiving of synchronization problems than human ears; we hear it, you know, so it actually can be painful to listen, right?

 

If, if music is out of sync, right? Yep, absolutely. Yeah.

 

So we also have the viewers here. So you have a live program feed that you can hit, where other crew members can drop into the live feed, or the off-air confidence monitor, or a multi-view. The multi-view sharing is super handy. Because if you’re in different places, seeing what your other cameras are shooting as a camera op can be super helpful.

 

And then this is our master status dashboard. What this is is real-time data about what each camera is doing, in real time. So someone more technical will want this screen; the multi-view could be for the producer, executive producer, just making sure, you know, looking at the big picture.

 

Yep. So you can see I’ve when I start and stop record here on these cameras, you’ll see those toggle on and off. When I start and stop the streams, you get the same kind of notification.

 

So it’s super handy for a TD in particular, to be able to not only monitor what the cameras are doing in real time, but also, I use it all the time when I’m doing shading. So like, say I have Carlsbad here and Joshua both up; one is at 4200, one is at 4800. The tints don’t match.

 

So I can go, oh, that’s why camera one and three don’t match: they’re not on the same white balance value. So I find that super handy when I’m doing shading.
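
As a toy version of the shading check just described, the sketch below flags cameras whose white balance values disagree, the same 4200K versus 4800K mismatch that made cameras one and three differ in the demo. The data structure stands in for the real-time status feed and is illustrative only.

```python
# Hypothetical snapshot of the per-camera status data shown on the dashboard.
cameras = {"cam1": 4200, "cam2": 4800, "cam3": 4800}

ref_name, ref_wb = "cam1", cameras["cam1"]
for name, kelvin in cameras.items():
    if kelvin != ref_wb:
        print(f"{name} is at {kelvin}K but {ref_name} is at {ref_wb}K; "
              "tints won't match")
```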

 

The grass, the skin tone doesn’t match from camera to camera, or the grass on the field. Why does camera three look yellow? Camera two looks blue.

 

You know, the grass looks yellow or blue or something’s out of whack. Yeah, yeah, yeah. Yeah.

 

So you can tweak it right there in real time. Yeah. And I gotta say that is one of the one of the really nice discoveries I had when I was doing our one of our early shoots where we had six cameras shooting a rugby match.

 

It took almost no time for me to set up, to chip them all to match, because with the Z Cams, even if you’re using slightly different models, the color profiles are all the same. Super easy to predict.

 

[Speaker 2]

Yeah, yeah.

 

[Speaker 1]

And then usually the video engineer is on site first; the camera operators show up later. So the video engineer can point all the PTZs at the same object, you know, or I could put a chip chart somewhere and aim everything at the chip chart and get them all locked down very quickly.

 

[Speaker 2]

Yeah.

 

[Speaker 1]

The important thing is he does it without getting up from his chair. Or he may not be on site; he’s probably not on site, so he can’t go and aim the camera. That’s another beauty of it. Well, he’ll have to get someone on site to put a chip chart, you know, out on the field, but that may not be necessary.

 

Just shoot at the same object, make sure the greens match. And you’re done. Yeah.

 

As long as those values match, the cameras are going to match. And from one iPad, you can control up to six robots, and switching between them is basically instantaneous. That master status dashboard can run from anywhere in the world, quite literally. The whole system is designed to be remote controlled.

 

So you could put, you know, you’re doing a six camera shoot instead of rolling a whole army. You could put a couple of smart people on site to set it up, make sure the networking all is solid. Everybody else can be remote.

 

Your camera ops can be remote. Your TD can be remote. Your director can be remote.

 

We’ve even done stuff with the audio tech being remote as well. So it’s super flexible in that way. On site, I mean, theoretically, all they need is an internet connection, a router with DHCP.

 

Do you need static IPs? Or can you use DHCP? So just plug the network in, and if it hits the router and there’s an internet connection, that’s it.

 

You know, literally plug and play. I mean, if they insist on static IPs, hopefully they have an IT person to set that up. But that is possible if the production prefers static versus DHCP.

 

Yeah, it’s actually dead simple to do a static IP. Let me show you. 56, I don’t know.

 

We don’t have it running on that one. Hold on. Yeah, I mean, you would need the, you know, the IT department would give you the IP, give you the mask, give you the gateway.

 

Well, you’ll get it all when the robot originally logs on. Let me see. I’m not sure.

 

Oh, I do have it running here. So this IP configurator. So right now, this is what the robot’s running.

 

If I wanted to come in, let me check and make sure. Well, you don’t have to break it. But you know, you can you can show us that there it’s in the settings.

 

If the customer is comfortable, just you know, DHCP boom, just let me try and break it. Let me let me try and break it, Jim. Okay, cool.

 

Go for it. So I can come in here and say, you know, the subnet mask in the gateway, it’s got because it got that when it did its DHCP. So I can set that static IP.

 

Now that particular robot is that is on port 2001. Hopefully it will come back once it’s done its little once it’s done its little reboot there. So there we go.

 

[Speaker 2]

It is you’re connected.

 

[Speaker 1]

You changed, reset to DHCP. Yeah, so you’re flushing the NIC; you’re telling the NIC to go get a new DHCP lease if there’s a problem, you know, if something’s wrong with the network.

 

Yes. Easy peasy. Yeah.

 

The other thing. The other thing I should mention that’s super cool: one of the big challenges for anybody who’s tried to do any kind of remote stuff at a big corporate site, or somewhere with very restrictive network security, is they’re not gonna let you put a router in there. The way the back end of this works is we essentially create a VPN between the robot and the control iPad. And through that connection, if it can negotiate either end, it will actually drop point to point as well. So you don’t have to go through the intermediary, if you will, to connect.
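
The connection logic described here, try a direct point-to-point link and fall back to the brokered VPN path, can be sketched as below. Every function name is a placeholder invented for illustration; AIR’s actual networking stack is not public.

```python
def can_negotiate_direct(robot_addr: str, ipad_addr: str) -> bool:
    """Placeholder NAT-traversal check; a real one would probe both ends."""
    return False  # assume a restrictive corporate network for this demo

def connect(robot_addr: str, ipad_addr: str) -> str:
    # Prefer a direct point-to-point tunnel when both ends can negotiate it.
    if can_negotiate_direct(robot_addr, ipad_addr):
        return f"p2p tunnel {robot_addr} <-> {ipad_addr}"
    # Otherwise relay through the VPN intermediary, which works as long as
    # both sides can reach the internet.
    return f"vpn relay {robot_addr} <-> {ipad_addr}"

print(connect("robot.local", "ipad.local"))  # vpn relay robot.local <-> ipad.local
```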

 

It’s designed like I you know, I came into this, I didn’t really know anything about it. I knew I plugged stuff in. So the boys have all educated as we’ve gone along.

 

And it has been extremely helpful to be able to remove a bunch of that complexity with networking, because, you know, I’m not an IT engineer, but I can go set this stuff up; I can essentially be the engineer in charge for a shoot, even though I don’t have a big background. Yeah, yeah. And don’t sell yourself short.

 

You changed to a static IP there. Kudos. So you get, you get your you get your networking certificate.

 

Yeah, well, here’s the thing, Jim, this stuff has got to be more user friendly. Like, yeah, we have a big labor shortage, because the muscle memory to, you know, remember how to do a particular shot, like, it takes years and years of experience to hit those things every time. Like, I got 30 years of shooting.

 

And I miss shots sometimes, just because, you know, I’m more of a director producer role than a camera operator these days, right? One of the things I love about this is the shot I can envision. I just that’s the shot I want robot you, you do it.

 

Same thing goes for the IT setup. Like, I just want this thing to talk to this thing. Do it.

 

Yeah, yeah, yeah. Well, one question I have your your the productions you do, do they tend to be all robotic? Or is it a mixture?

 

You know, will a customer say I’m doing a live event, and I want to robotic and the rest of my production is, is traditional? How do you can you mix those two worlds together? Absolutely.

 

We just did a shoot last Friday with FloSports. And that one was, I hope I’m allowed to say we did a shoot with FloSports. Oh, violated your NDA.

 

Oops. I don’t think we have an NDA with it. So yes, we did.

 

We did an awesome shoot with FloSports up in Orange County. Every couple of months they do these big jiu-jitsu tournaments for FloGrappling. They had that designed as, I think, an eight-camera shoot originally, or a six-camera shoot.

 

We had a call with them and they said, hey, why don’t you come up in a couple of days and integrate your robots into our shoot. So we went up there, put one up directly in the grid, looking down on the mat itself, another one up above where the athletes were entering, and ran them back to the truck that they had.

 

So you know, we want to play nice with everything. We’re agnostic about what tools people use. Yeah, so then how did that work? Because that’s my next question. How do you mix cloud with physical?

 

In that case, we used the SDI out of the robots and brought those to the truck. So everything was hardwired, you know, the more traditional workflow, but with the added benefit of a cinematic PTZ camera. Yeah, okay.

 

So you can bridge both. I guess if I wanted non robotic cameras in my workflow, I’d have to bring all the cameras to the cloud. Can I bring a non air camera into your planning tool?

 

Absolutely. Well, with a non-AIR camera, you can’t do the deep linking, because it doesn’t have the brain in there with the data behind it. But yeah, right. Yeah.

 

No, if you have a regular camera, the AirCloud just takes an SRT stream. So if you have, you know, a Haivision, a Makito or something like that, and you’re doing an SRT encode, you just send it to that; you have to manually go and send it to that particular port on the vMix instance or whatever. Our partner Haivision owes you 50 bucks for saying hi.

 

Well, they invented SRT. But if you said another brand, I would have interjected. I mean, there are other brands of SRT.

 

But yeah, so you could do a low latency SRT feed from your non robotic camera, bring that to the cloud instance of vMix. And that’s how you would bridge the two worlds together in the cloud. Correct.

 

[Speaker 2]

Okay, correct.

 

[Speaker 1]

So you can do you can go directly from cameras to a truck on site through baseband. You can go directly from the robots. Again, the robots use SRT as their transport protocol as well.

 

So that’s SRT going from the robots to cloud-based switching. And if you have another camera, say some other, you know, a Haivision encoder on there, you can program that to hit our AirCloud as well. You can also, if you have your own infrastructure, like say you have a REMI network operations center that you’ve already built, you can also tell the robots to hit that instead. So it’s flexible to do whatever you need to do with it.

 

I’m just thinking out loud, if this is a crazy question, you can shoot me later. But do you have like an open? Do you have like an open API where if I did want to bring it into some pre existing ecosystem, that that is a possibility?

 

Absolutely. So yeah, we have an open API. There’s some other ones that I am under NDA and can’t talk about, but that are doing that exact thing where they they want the robotics, but they have a deeply entrenched system where the robot needs to play nice with what they’re already doing.

 

Copy that. Copy that. No, this is this is great stuff.

 

Great stuff. So I’m going to say it on air now. So you promised that innovation can have one of these cameras in our booth at NAB.

 

We’re going to make that good. Yeah, we’re in booth W2230 in the West Hall. You said you’re going to be in the Broadfield booth at NAB?

 

Or you have your own? At Vegas, we’re going to be in the Propel ME startup space. So over in the I forget what it’s called.

 

Now. There’s an innovation. Yeah, pavilion.

 

Yeah, pavilion.

 

[Speaker 2]

Pavilion.

 

[Speaker 1]

Yeah, yeah, yeah, yeah.

 

[Speaker 2]

Yeah.

 

[Speaker 1]

And you said it’s in the West Hall. Yes. So, Nick, we’ll include that in VidOvation’s marketing for NAB, where you can see this up close.

 

Were there more slides, Kevin, that you wanted to show? I like that color you added background as Nick was talking. Did we did we cover everything or you want do you want to show some of the use cases?

 

Yeah. Excuse me. Yeah.

 

Sorry. Got a bit of a cold here. Yeah, we can do that.

 

I was just throwing up some of what Nick started talking about. Don’t violate any NDAs. No, I shouldn’t.

 

About what we’re doing with FloSports the other day, and really, it’s kind of a great example. So you’ve got the UFC here; this is a shoot we did last November at Madison Square Garden, where, you know, to me, this illustrates really what we’re doing here on the front end, at least with the cameras. So this is the way we shoot today: you’ve got guys with cameras over their shoulders or still cams up at the face.

 

We’re standing on platforms. Don’t tell me that’s not a liability if somebody steps out back of that. But these are the expensive billionaire seats here.

 

Right. And these guys are blocking views. This is why I get up in the morning.

 

I want that person to be in another room. Yeah. Yeah.

 

Yeah. The small-profile little robot popping up over there is far less obtrusive. But then you got to pay each of these guys per diem for hotel, airfare, food.

 

You know, you can imagine someone gets somebody gets COVID. They can’t travel. They get sick.

 

They could they could they could shoot from home. Right. Yeah.

 

It’s a potential solution as well. But here’s what we had. We had two of our bots up in the grid flying Sony Alphas.

 

We’ll be announcing this this product line probably in about two, three weeks. So you’re getting a bit of a scoop here. But and then you’ve got the op in the back room, kind of over in the green room area, running those cameras and just getting these amazing shots that you can’t get any other way.

 

Wow. Wow. Yeah.

 

And that is really, for the tier two, tier three productions, not having to roll the truck, not having to bring a whole army, is massive; it enables you to shoot stuff you couldn’t do profitably. For the tier one stuff, they’re still going to roll the truck because they have the budget. But what we do there is enable them to put cameras in places they wouldn’t otherwise.

 

Like when we did the NBA Summer League, we went out right on the scorer’s table. Can’t really put an op there, you know, because they’re going to be blocking, again to Kevin’s point, right, blocking people’s view. We’re doing a thing with monster trucks where you can’t put a camera operator up there because it’s a little dangerous.

 

So the flexibility that the system brings is one of its real values. So I see a lot of sports; that’s an obvious play for this. And we’ve done a couple of projects with cinematic cameras for House of Worship, you know, that’s kind of similar.

 

It’s a broadcast workflow, but they want the shallow depth of field. You know, it’s the same thing with like a rock concert, right? You got singers up front, then you got guitarists that are a little further back, and then the drummer behind them, you know, they want the singer only in focus and the drummer blurry, right?

 

Yeah, you know, then when they zoom in on the guitarist, the singer in the foreground is out of focus, right? That’s what we want. And if a church or a house of worship doesn’t have all the operators or the skills, for $10,000, I mean, you’d be hard pressed to get a camera body alone for 10k, right?

 

[Speaker 2]

Right.

 

[Speaker 1]

That's without the control, without the cloud. Yeah. So this is the other thing... go ahead, Kevin, you first.

 

Well, I was just going to say, this is the other thing that's made me crazy over 30 years: I buy a piece of kit, I get it home, I plug it in, and I'm missing some part I need to make it work. I've got to order that and wait several days, or, back in the day, wait weeks for it to come in.

 

You have the lens but not the adapter you thought would fit, whatever. Exactly. So with the AIR One, you take it out of the box and shoot with it.

 

It includes everything you need: you get the robot, the camera, the lens, the zoom motor, the ring that goes on the lens, the quick-release plate, an iPad for control, the dongle you need to hardwire your iPad into the network, the power supply, and obviously the software and all of that other stuff. You do need to supply your own CFast card to record internally, and whatever network you're going to attach to, but you can take it out of the box and shoot with it immediately. And it comes in a case.

 

That’s another thing too. It’s like you spend 100 grand on something and it comes in a cheap corrugated box. You know, and then you have no way to ship it.

 

You've got to make a case for your $100,000 camera. Here you've thought of everything, because you're production guys, right? You were there.

 

Okay, it's my turn, Nick. I do the arm-wave-and-sell thing. I like to say we're like the Ginsu knives of pro AV, right?

 

So it's, "Now how much would you pay?" It comes with this, it comes with this, it comes with this, everything. So you've got your iPad, you've got your robot, you've got your base camera module, the Z CAM E2-M4.

 

It comes with the Panasonic lens, all your connectivity, your quick-release plate, and three months of SaaS service. And the coolest thing, the thing I love about this: look at this. It's FAA legal.

 

Well, it is. Let's just say I've flown all over the world with it as a carry-on. It's a little oversized, a little too fat, but you're not going to get it on one of those little puddle-jumper planes where they say, "Oh yeah, you can bring a carry-on."

 

And your sandwich doesn't even fit. I travel with a big backpack with my computer, and I've got to sit on it. You know, there's a question in chat on whether or not there are rental options on this bundle.

 

Does VidOvation do rentals? We do. We're new to the AIR line, so that is a possibility, that VidOvation would be renting this.

 

Matt, we can get back to you on that. We'd like to know how long you'd need it, that kind of thing. We could work something out.

 

We rent a lot in the bonded cellular market; big, big rentals. Most people don't buy that stuff; they want to rent it. So it makes sense.

 

We rent some of our microwave wireless gear as well. So why not rent the AIR? Well, you bring up a good point there.

 

We've done some testing with VidOvation in the UK using 5G for connectivity, because the robot's Ethernet plug doesn't really care how it's connected, as long as it's a solid internet connection. So you could plug a 5G modem into there and stream. Yeah, actually, that's a good point.

 

We should try this. The PGA right now uses our bonded cellular; it's a Haivision product. And thank you for mentioning Haivision.

 

[Speaker 2]

You did your homework.

 

[Speaker 1]

So we use the Haivision mobile encoders, our bonded cellular. Obviously, that brings the camera video back to St. Augustine, to their master control. But we also have a data-bridge function where, for lack of a better term, we provide a bonded cellular VPN.

 

And it does have something like 150 milliseconds of latency. Would that be a challenge if you're controlling the camera? Or do you just have to make slower movements?

 

Well, it depends on what you're doing. Latency is the enemy for doing any kind of real-time control.

 

[Speaker 2]

We wanted zero, ideally.

 

[Speaker 1]

Yeah. So if you're at about 150, generally that's okay. If you're up around 300, that's just too slow for manual movements.

 

But there are things you can do in those situations. We have the speed of light to overcome as well. Like when I was in Budapest and Zuby was controlling the robot from here; that's obviously halfway around the world.

 

So he's going to have some latency, and he's not going to track a sprinter in real time. But with the recalls, if you want to go to a specific position, that happens instantaneously.

 

The command feed, "robot, go to this position," flies; it's the video feed coming back that has the lag. So there are all kinds of scenarios where the speed of light might be a hurdle to overcome, but you could still use the set-and-recall functions.
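To make that trade-off concrete, here is a minimal sketch of the decision being described. The function and constant names are illustrative, not AIR's actual SDK; only the latency thresholds come from the figures quoted in this conversation.

```python
# Minimal sketch, not AIR's actual SDK: the function and constants are
# illustrative. Thresholds are the rough figures quoted in this discussion.
JOG_OK_MS = 150     # manual joystick moves generally feel fine up to ~150 ms
JOG_LIMIT_MS = 300  # above ~300 ms, manual tracking is too slow

def choose_control_mode(round_trip_ms: float) -> str:
    """Pick a control strategy from the measured control-link latency."""
    if round_trip_ms <= JOG_OK_MS:
        return "manual-jog"       # track the action live from the joystick
    if round_trip_ms <= JOG_LIMIT_MS:
        return "manual-jog-wide"  # still jog, but frame wider to absorb lag
    return "preset-recall"        # fire stored positions only

# The outgoing command ("robot, go to this position") is a few bytes, so a
# recall executes immediately; only the confirming video frame arrives late.
print(choose_control_mode(150))  # -> manual-jog
print(choose_control_mode(450))  # -> preset-recall (e.g. Budapest to the US)
```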

 

Well, then also, say I'm an operator on bonded cellular. We can now get down to sub-250-to-300 milliseconds over bonded, assuming the connection can sustain that. And the Haivision hardware keeps improving; every six months they're lopping another 50 milliseconds off. I think it's 250-ish to 300 now. But the data connection might be quick while the video the operator is watching is a second old. Well, how do I control with that?

 

Yeah, well, exactly.

 

[Speaker 2]

Yeah, yeah. Yes.

 

[Speaker 1]

The cool thing is that the control video flows over the data connection, not over the broadcast video path. And this is one of the really cool things about it: the robot actually produces multiple streams at once. The stream we're using for control is an MJPEG stream, which is fat and ugly, but it's fast.

 

The stream you're sending to the switcher is actually a different stream. It's H.264 or H.265 in an SRT wrapper, which has delay anyway as part of its error correction. The feed that's going to the switcher is going to be delayed regardless, so that really doesn't matter.

 

If it has a few hundred milliseconds of lag, fine; it's the data connection for the real-time control that I need to be really tight with. And the fidelity of the video going over the data connection doesn't need to be super great; the operator is just following the basketball player as he's doing the layup. That video can be a little soft.
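As a rough sketch of that dual-stream idea: the control feed can be consumed as MJPEG over HTTP, which OpenCV reads directly, while the program feed travels separately as H.264/H.265 inside SRT. Both URLs below are made-up placeholders, not real AIR endpoints.

```python
# Sketch of consuming the robot's two streams as described above.
# Both URLs are hypothetical placeholders, not real AIR endpoints.
import cv2  # pip install opencv-python

CONTROL_URL = "http://robot.local/control.mjpg"     # MJPEG: fat, soft, fast
PROGRAM_URL = "srt://robot.local:9000?mode=caller"  # H.264/H.265 in SRT

# Operator's control view: every MJPEG frame is an independent JPEG, so
# there is no GOP buffering: minimal latency at the cost of bandwidth.
control = cv2.VideoCapture(CONTROL_URL)
ok, frame = control.read()
if ok:
    print("control frame:", frame.shape)  # e.g. (720, 1280, 3)
control.release()

# The switcher takes PROGRAM_URL instead. SRT holds a receive buffer for
# retransmission-based error correction, so that feed is cleaner but
# deliberately delayed, which is fine because nobody steers the robot from it.
```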

 

So that's factored in, and customers just have to learn. Bonded cellular nowadays, particularly the Haivision implementation, is getting more and more reliable, but there could be a dead spot. So in your production you've got to plan: camera one, get Tiger Woods' reaction. Camera two... well, Phil Mickelson is in a different league now.

 

I can't keep track. Yeah. Camera two, get his reaction.

 

Camera three, go wide in case we've got to dump to you. Or if you're trying to track something, you'll have to frame a little wider to offset the latency. You can adjust your production to work around some of the limitations due to latency. Right.

 

I mean, is that right? Yes, absolutely. Yeah.

 

And you want to practice, do a rehearsal, scout it ahead of time. This is 101 stuff: scout it, test the connections, all of that.

 

We're also a Peplink partner. So maybe the bonded cellular video product isn't appropriate for a given job; we specified a Peplink router with 24 modems for one customer.

 

It sounds crazy: 24 modems, some 4G, some 5G. So theoretically we could put this big, fat gateway at the venue to get those cameras to the cloud via SRT and connect to vMix.
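As a back-of-the-napkin sketch of that venue-gateway scenario: each robot would push an SRT stream through the cellular gateway to a cloud ingest, and vMix would take each one as an SRT input. The host name, ports, and stream IDs below are invented for illustration; only the SRT URL conventions are standard.

```python
# Hypothetical venue-to-cloud sketch: host, ports, and stream IDs are
# invented for illustration; only the SRT URL conventions are standard.
CLOUD_INGEST = "cloud-ingest.example.com"

def srt_push_url(camera_id: int, base_port: int = 9000) -> str:
    """One SRT caller URL per camera; 'streamid' labels the feed at ingest."""
    return (f"srt://{CLOUD_INGEST}:{base_port + camera_id}"
            f"?mode=caller&streamid=air-cam-{camera_id}")

for cam in range(1, 5):
    print(srt_push_url(cam))
# In vMix, each feed would then be added as an SRT input on the cloud side.
```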

 

Every project is different, right? Someone very smart told me a long time ago: don't design the system on the phone call; just take notes.

 

But we're just throwing out hypothetical scenarios here, and this is part of the value-add that we bring. You guys have made an amazing... well, I was going to say an amazing camera. It's not a camera. You've made an amazing robotic ecosystem.

 

VidOvation, as a systems integrator and value-added reseller, can add pieces to it. You mentioned SKAARHOJ for joystick control; we can add that piece.

 

If bonded cellular makes sense, we can add that. If you just need an internet connection, Peplink may make sense. It's all great stuff.

 

One of the things that's super cool about the scenario you bring up: actually, one of our very first field tests was at the U.S. Open here at Torrey Pines a couple of years ago. We took one of the robots out, set it up there on 14, and went back to the truck and controlled it.

 

That was just a proof-of-concept thing. Golf and those kinds of events require a massive amount of setup. Imagine a future where, instead of having to pull miles and miles of cable, you're putting out those bonded cellular modems, or 5G with network slicing so that you have more uplink. The robots can also run on battery power.

 

AC is our default, but if you wanted to take a V-mount battery and run D-tap out to LEMO into the robot, you can power the thing for six or eight hours. And it is a standard, industry-standard LEMO power connector; it's not some weird pinout.

 

Yeah, it's a standard two-pin LEMO. So you plug your adapter into the two-pin LEMO and, boom, you're running off battery power.

 

Yeah. When we did the rugby thing, I wanted cameras in the end zones, and I didn't want to have to run power all the way out there. So we put on those Anker battery packs, the little 12-volt packs.

 

So it'll do 12 volts up to whatever the range is for V-mount; from about 11.2 to 15.6 volts, you can plug into it. It's got the extended input range to support the changing battery voltage.
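A trivial sanity check on that input range, using the 11.2-15.6 V figures quoted just above; the function itself is only illustrative.

```python
# Illustrative check of the wide DC input range quoted above; it covers a
# V-mount battery sagging from full charge down toward empty.
VMIN, VMAX = 11.2, 15.6  # volts, the range quoted in the conversation

def supply_ok(volts: float) -> bool:
    return VMIN <= volts <= VMAX

print(supply_ok(14.4))  # True: nominal V-mount battery voltage
print(supply_ok(16.8))  # False: e.g. a 4S lithium pack at full charge
```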

 

On one of Kevin's slides I saw Power over Ethernet mentioned. Is the wattage it draws compatible with PoE or PoE+? Not really, because what it really needs is PoE++, which is not super common out there.

 

It's kind of exotic. The other problem is that if you have multiple cameras on the same PoE switch, you're going to overload it. That's one of the reasons we decided not to integrate PoE.
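For context, these are the standard IEEE PoE tiers, plus a toy power-budget check. The per-camera wattage and switch budget below are made-up example numbers, not AIR's published specs; only the IEEE figures are standard.

```python
# Approximate power available at the powered device for each IEEE PoE tier.
POE_PD_WATTS = {
    "PoE   (802.3af)":        12.95,
    "PoE+  (802.3at)":        25.5,
    "PoE++ (802.3bt Type 3)": 51.0,
    "PoE++ (802.3bt Type 4)": 71.3,
}

def cameras_supported(switch_budget_w: float, per_camera_w: float) -> int:
    """How many rigs a switch's total PoE power budget can actually feed."""
    return int(switch_budget_w // per_camera_w)

# Made-up example: a 370 W switch budget against a 60 W-class camera rig,
# the kind of draw that pushes a device into PoE++ territory.
print(cameras_supported(370, 60))  # -> 6; a seventh rig would overload it
```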

 

Your iPads, though: the package ships with a dongle, a PoE dongle. You plug Ethernet into one end and the other end goes into your iPad, so the iPad stays powered all day.

 

[Speaker 2]

Okay.

 

[Speaker 1]

So you're not worried about the iPad running out of battery or having to charge. It's getting PoE, included in the kit.

 

Because I don't want to get out there and go, "Oh, crap." Well, yeah. And also because Apple has now changed to USB-C.

 

I don't have a Lightning adapter anymore. Oh no. Yeah.

 

It's actually funny you mention that, because we just shifted over from the Lightning iPads to the USB-C iPads specifically for that reason. But back to Kevin's slide: I could have a PoE++ injector backstage. Theoretically, will the camera's Ethernet port accept PoE power?

 

If it's the right... that doesn't matter. If you plug PoE-powered Ethernet into the robot, it's not going to make any difference; it's not going to power it, but sending it does no harm.

 

What you could do is this: say I have an Ethernet run going to the robot. I could put a PoE++ injector just on that one run, and on the receiving end do a breakout from the Ethernet.

 

[Speaker 2]

Okay. Okay.

 

[Speaker 1]

So it's split. You need an injector near the switch, and then you need a device to split it at the far end.

 

And is that included in the kit, or would that be an extra? It is not. Okay.

 

[Speaker 2]

It is not. But it exists.

 

[Speaker 1]

You could split it; basically it's a little dongle that breaks out the Ethernet and the two-pin LEMO power. And now I've got one cable going to the camera instead of trying to find an outlet. Okay.

 

Theoretically that's totally doable. We haven't done it, but on paper the parts are available to do it. Yep.

 

Very cool. I don't see any questions.

 

Kevin, did you cover what you wanted to cover in the deck there? You're good? I think so.

 

I think we've got some pretty whiz-bang stuff there. Is there anything else, any other questions from the audience, or anything else we should cover? Not that I see.

 

Did you show the performance dashboard? The master status dashboard. Yeah.

 

The one thing I didn't cover in the AIR Cloud here is the media asset management. So this little tab down here with the filmstrip: are you guys seeing this? Yes, we see it.

 

Okay. So I can upload directly from the robot to storage in the AIR Cloud. I literally go select my clips and upload them.

 

I can also upload from a PC as well. And this is what we call a "skinny MAM": simple asset tagging so that I can find stuff again.

 

Like if I wanted to come in here and tag this with "mountains," I can add that as a tag to that particular clip. I can put that clip in a specific collection. Then when I go in and search for "mountains" again, there it is.

 

I can create different collections of stuff and filter on them: all my stock footage in one, all my demo footage in another. It's a simple asset manager.
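The "skinny MAM" behavior described here, tag, collect, search, boils down to a very small data model. Here is a minimal sketch; the class and field names are illustrative, not AIR Cloud's actual schema.

```python
# Minimal sketch of the tag/collection/search behavior described above.
# Class and field names are illustrative, not AIR Cloud's actual schema.
from dataclasses import dataclass, field

@dataclass
class Clip:
    name: str
    tags: set = field(default_factory=set)
    collections: set = field(default_factory=set)

library = []

def add_clip(name, *tags, collection=None):
    clip = Clip(name, set(tags), {collection} if collection else set())
    library.append(clip)
    return clip

def search(tag):
    """Find every clip carrying a given tag, e.g. the 'mountains' example."""
    return [c for c in library if tag in c.tags]

add_clip("aerial_01.mp4", "mountains", "drone", collection="stock")
add_clip("demo_ufc.mp4", "ufc", collection="demo")
print([c.name for c in search("mountains")])  # -> ['aerial_01.mp4']
```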

 

Very cool.

 

Yeah. I just chatted to the... oh no, I sent it only to Fallon. Oops.

 

Let me send this to everyone. I'm just letting everyone know that it takes about a week; we will edit the video and transcribe it.

 

If you guys are okay with it, and if there are certain parts of your deck that you don't want shared publicly, just tell me; otherwise, give me the presentations and I'll put the recording, the transcript, the audio for podcasts, and the presentation online.

 

So other people can enjoy it. I'm putting up my phone number and a link where people can book a meeting with me as well; we use a Calendly scheduler.

 

So if you want to talk about a project, folks, you can ping me. We already have some of the AIR ecosystem on our website; this recording and webinar will be in addition to that.

 

Unless you folks have any other questions, we'll let you get back to your Wednesday. Everybody good? Well, thank you so much, Nick.

 

Thank you, Kevin. Thank you, Fallon, for being our producer. Thanks, guys.

 

Thanks for letting us be here, Jim. We look forward to rolling this out to your channel and seeing what kind of amazing content they can produce with it. Yes.

 

Yes. And come see the technology in the VidOvation booth. And you guys go by AIR for short, right? That's your nickname. Yes.

 

Okay. Cool.

 

Well, thank you, guys. Thanks, Kevin.

 

Thanks, Nick. I'll talk to you guys soon. Thank you so much.

 

This was great stuff. I'm excited. Thank you, guys.

 

Bye.

 

 

 

 

 

 

 

 
