Geometry plays a big role in the world around us: straight lines and the shapes they create make up much of what we see. Geometric optical distortion changes the way those lines appear on screen, a problem facing many modern videographers as fisheye and wide-angle live HD cameras become more prevalent in mainstream recording.
While wide-angle cameras provide a level of depth that standard lenses don’t capture, they do pose problems for rectilinear projection. Lines that were once straight begin to curve and distort around the edges of the shot, and this optical aberration makes it difficult for viewers to determine what they’re looking at in the moment.
For videographers with the time to correct these issues in the editing process, it’s not a problem. However, live broadcasters covering sports games, local news, medical research, or corporate AV are unable to rectify the distortion before the image reaches viewers. This is where VidOvation’s AlphaEye becomes an asset.
The AlphaEye Lens Distortion Corrector
Using a hardware-based geometry engine with subpixel accuracy, the AlphaEye corrects wide-angle lens distortion in real time. The hardware corrects output in such high detail that no artifacts are visible to your audience. Whether you’re capturing a sporting event or working in the government sector, you can rest assured that every digital image is delivered in all its HD glory. The AlphaEye is beneficial for a variety of videography work, including industries such as:
- Wildlife filmmaking
- Broadcast television
- Reality television
- Medical video for non-broadcast purposes
- Online video gameplay
- Security footage
More companies are turning to AlphaEye for live feeds in conferences and remote video meetings as well.
Using Your VidOvation AlphaEye for Real-Time Corrections
The AlphaEye sounds high-tech, but setup couldn’t be easier. The unit works from a distance, giving you the freedom to film in high definition without extra baggage. Using a standard 3G coaxial cable or a fiber connection, you can maximize your distance from the AlphaEye while still turning curvilinear distortion into geometrically sound images.
To use the AlphaEye, the device must be paired once with a laptop, desktop computer, or compatible Microsoft Surface tablet. The connection is made over USB so that correction parameters can be configured for the distorted image. Once the settings are saved, the computer can be disconnected and the AlphaEye will resolve image imperfections as they occur.
Why the AlphaEye is Best for Live Footage
The video production industry is evolving at a fast pace. New devices and software are constantly appearing on the market. While many of these solutions are perfect for videographers with time to edit, they don’t help in a live feed situation.
With parameters including distortion strength (with X and Y correction either ganged together or adjusted independently), zoom, and more, AlphaEye delivers a high level of correction in any given moment. It may also be used with moving cameras to capture intense action on screen; for moving shots, AlphaEye applies distortion correction and zoom.
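For readers curious what those controls correspond to mathematically, a generic one-term radial model can sketch the effect of a strength parameter, ganged versus independent X/Y correction, and the zoom used to hide cropped edges. This is an illustrative assumption only, not AlphaEye’s actual algorithm; the function and parameter names below are hypothetical.

```python
# Hypothetical sketch of strength / X-Y ganging / zoom controls using a
# generic one-term radial model. This is NOT AlphaEye's real algorithm;
# the names and the model itself are illustrative assumptions.

def undistort_point(x, y, strength=0.18, zoom=1.1, strength_y=None):
    """Map a normalized coordinate (image center at 0,0; edges near +/-1)
    from its barrel-distorted position back toward a rectilinear one.
    strength_y=None gangs Y to X; a separate value un-gangs them."""
    ky = strength if strength_y is None else strength_y
    r2 = x * x + y * y                   # squared distance from the center
    x_c = x * (1.0 + strength * r2)      # push compressed edge points back out
    y_c = y * (1.0 + ky * r2)
    # Zooming in crops away the ragged border the correction leaves behind.
    return x_c * zoom, y_c * zoom

print(undistort_point(0.0, 0.0))       # the center stays put: (0.0, 0.0)
print(undistort_point(0.9, 0.0))       # a point near the edge moves the most
```

Note the trade-off the model makes visible: the stronger the correction, the more the periphery moves, and the more zoom (crop) is needed to keep a clean frame.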
AlphaEye operates with one video frame of latency, giving seamless transitions from shot to shot. This removes the concern that auto-correction will impede video transmission by delaying the feed or creating skipping or lag. To the viewer, this single frame is virtually unnoticeable and never takes away from the overall viewing experience.
The AlphaEye requires no editing room, downtime, or interruption, so your video streams smoothly, at regular speed, and in HD as the action occurs. With its genlock input, AlphaEye eliminates the need for additional synchronization equipment. In a live stadium event, for example, the device feeds corrected video instantaneously to big screens and other onsite monitors during the concert or game.
Choosing the AlphaEye for Your HD Live Cameras
Having worked in the video technology industry for so long, VidOvation understands the growing need for advanced video editing services. This is especially true for live feeds, where few resources are offered.
For small productions and business use, budget plays a big role in the type of video equipment purchased. AlphaEye is an affordable solution to the visual limitations of fisheye and wide-angle lens cameras. With support for 1080i, 1080p, and 720p resolutions, the tool suits a wide variety of applications.
To get the best live broadcast possible, see how VidOvation and the AlphaEye can help you produce high-quality HD without image distortion.
Jim Jachetta (00:00):
Good morning everyone. I’m Jim Jachetta, CTO and co-founder of VidOvation. Today we have a very special guest, my good friend MC Patel. He is the CEO of Alpha Image, and today he’s going to talk about a very innovative product he has called the AlphaEye. Now, all of us in the television and video industry have seen distortion or artifacts in video from fisheye and wide-angle lenses.
Jim Jachetta (00:49):
So I’ll give you an example. It’s not uncommon to have a wide-angle shot above a basketball hoop on the backboard. And the artifact is that the lines on the court are curved and distorted; things on the edges of the image are expanded, and things in the middle are crushed a little bit. So we get some really cool shots, shots we couldn’t normally get, with these small wide-angle POV cameras. But MC is going to tell us today how he solved some of these problems, how we can correct some of these lens-type distortions in live video. So MC, take it away, lay some knowledge on us.
MC Patel (01:41):
Well, I’ll try. So as Jim pointed out, people are using micro cameras to get shots that they can’t ordinarily get with people. So you can mount a camera in an interesting place. Generally you have several issues, but primarily there’s distortion, because you use a wide-angle camera, because you don’t have room or control over how you frame the shot. So people are putting these all over the place. My game is cricket, so one of the things that we’re doing is we’re putting it in the stumps, and you can see the curvature there.
MC Patel (02:23):
And obviously in football, people want to see the shot from behind the goal. And so these little cameras, A, they’re cheap, and B, they give pretty good pictures, except the lenses are such that, to get that wide angle, you get barrel distortion. So we have been working with a camera manufacturer and we came up with the idea, “Why don’t we do this electronically?” And it started as a joint project, and then we took it over. And so what I’m going to talk about today is a little bit about just where you would use it, and then we’ll go to what’s inside the box, the ins and outs, how do we control it? And then we’ll show you a clip which shows the end result. So if you go to the next slide, Jim.
Jim Jachetta (03:18):
There we go.
MC Patel (03:20):
So basically, it’s a hardware-based lens distortion corrector. We wanted to keep the picture quality high, because these cameras are getting very high quality anyway. So it’s 10 bits, and it’s got proper interpolation. You can think of this as a specialist digital video effects device, a DVE, like the ones I was designing as a junior engineer 20 or 30 years ago. Those used to take up 11 or 12 [inaudible 00:03:55] of space, and as you can see, this is slightly bigger than a cigar box. So it’s real time: you put video in, we correct the distortion, and video comes out. But because a lot of the cameras have HDMI ins and outs rather than SDI, we’ve actually got an HDMI input and an HDMI output too, so it’s not restricted to SDI video only. Another thing that we put inside is a de-interlacer. You may have a camera providing an interlaced or a progressive output, and on the output you may want progressive or interlaced, so this has got a de-interlacer, and we also have a scaler in there. So if you have a 720p camera but your station’s 1080, we can take care of it. And vice versa: we can get 720 out if you have 1080 going in.
Jim Jachetta (05:02):
I should mention there’s an optional… So it has fiber I/Os, correct? Fiber in and out?
MC Patel (05:09):
Correct. We put the fiber in at the early stages and made it an option, but what’s happened is the reality is most of the guys say, “Well, we’ve already cabled for that. The camera’s there, we’ve already cabled for it, and we’ve already done the transport.” But it’s available, right? What we found is this sits best close to the switcher. And another thing, of course, is that if you’ve got remote cameras, they’re not genlocked, so they need to be genlocked as well. And so we have a genlock input in the box. So if you go to the next slide, I’ll walk through the actual applications and then we’ll look at the block diagram.
MC Patel (05:53):
So you can see here a classic camera angle. What these guys are trying to do is have you see the corner on a soccer field: if a corner kick comes in, if it goes into the box, you want to capture it. The top picture shows what the image would look like, and as you mentioned, Jim, there’s curvature everywhere and there’s distortion. The bottom picture is what it looks like once we’ve processed it, so the boxes look like boxes and so on.
Jim Jachetta (06:27):
And I imagine also in a sport like soccer or basketball, where the ball is larger, the ball won’t look round, correct? You see the distortion not only in the straight lines on the field, but the ball will be compressed and look strange, right?
MC Patel (06:45):
Yeah, but more importantly, these are shots that the various fans want to see. Now, I think in Brazil, the guys were interested in putting in three cameras. They wanted to see the corner from the left end, the corner from the right end, and they wanted one at the back of the goal in case there was a penalty kick or a goal. Now, if you imagine you’re recording all three of these, you could create quite an interesting experience out of it. At the moment, as I said, it’s just the excitement of getting the corner shot. Seeing it in an undistorted manner is really worth it, but you wouldn’t go through the expense of having a cameraman there, or a professional camera there. So they put in these cheaper cameras.
MC Patel (07:35):
And then as we spoke to various people, as you can see, people have an interest in doing this for wildlife. All the natural history guys go out and they’ve got cameras in birds’ nests and so on. And also reality TV; we had a huge interest in that, because again, they put in lots and lots of little cameras, and they want to be able to capture them. So many applications for this. If we take the next slide, I’ll walk you through the ins and outs; I’ve already alluded to some of them. So here we go. What we have is, as you said, fiber in, HDVI in. We call it HDVI: it’s the same as HDMI, but there’s no rights management, because these are raw feeds. So electrically, it’s not… A 3G HD-SDI in, a genlock input. And then we give you the outputs as fiber, HDVI, or 3G.
MC Patel (08:38):
So there has been some interest in using this for security and CCTV, which is why the HDMI output is interesting. And from a monitoring point of view, you can literally just put an HDMI monitor on it and look at it there. The SDI output’s always available, so these are parallel outputs. So that’s basically the technology. And as I said, we have a progressive-to-interlaced converter, or interlaced-to-progressive. We have a scaler, and then we have the output image. So in some ways, this product is really designed with the broadcast market in mind. Now, how do you set this up and how do you correct it? Before we go to the control side, I’ll just discuss this slightly.
MC Patel (09:32):
The idea is every camera has a certain distortion characteristic. So you can set up that distortion characteristic, remember it, and call it up per camera. This product is not really designed for you to switch cameras ahead of this unit, and that’s because the distortion information resides in the PC or the laptop and you download it into the unit, and that takes some seconds to do. There is some flash memory to do it, but we can’t switch in the field interval. So you can’t pre-switch a camera, but you can have multiple cameras supported by a single unit. [crosstalk 00:10:19]
Jim Jachetta (10:18):
You mentioned, MC, that part of the problem of using these small cameras is that they’re not genlocked when the video’s coming in. So you’ve got the shot in the back of the basketball net, and you’re not sending a reference signal to that camera. So you need a frame synchronizer anyway to bring this into your workflow. So that’s an added benefit: you feed this box the genlock, and now these asynchronous videos are locked to your house or locked to your truck, correct?
MC Patel (10:50):
That’s correct. Yeah. We have tried to make this what I call broadcast friendly. You don’t want to have three little boxes to accommodate this, so we provided a number of options in it. So you have the genlock. Similarly, as you say, some of the cameras are progressive only, and there’s still an awful lot of interlaced HD out there, so you need a progressive-to-interlace converter, et cetera. So what we’ve tried to do here is not just put in the distortion corrector, but actually ask, “What is a real situation? What’s a real scenario? And what are the gotchas?” The ones that say, “I bought this box, but now I need to genlock it. Now I need to de-interlace it. Now I need to have fiber.” So we’ve given this, and it’s modular, so we can offer it.
MC Patel (11:39):
If you say the fiber’s taken care of and we just need to give you HD-SDI, we can do that. Or you might say, “My switcher has frame buffers on it, so I don’t need the genlock.” So it’s very, very flexible in terms of that. But equally, we’ve had to design this so that it does the job with these various permutations, so there’s a comprehensive set of electronics inside there. So I was just basically saying, how do you set up this distortion and what do you do? Now, if you look at some of the software that does this, it sometimes offers you a grid and says, “You can distort bits of the grid to do the correction.” And we looked at this and said, “This is not a very satisfactory way to do it.”
MC Patel (12:34):
A four-by-four or six-by-six grid where you move the points around? Life’s too short; we don’t want to do that. So we’ve approached this mathematically. We basically analyze the mathematical characteristics of the distortion, then essentially calculate an equation for the correction inside the computer, and then download the coefficients of that equation into our geometry engine. Now, in theory, I could program the geometry engine with a grid as well, but we’ve gone a little bit further, and that has some benefits that I’ll discuss over and above the lens discussion. So if you just bring up the next slide.
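The workflow MC describes here, fitting a mathematical model of the distortion on a computer and downloading only the resulting coefficients to the hardware, can be illustrated with a toy one-coefficient radial model. The model, the function name, and the closed-form fit below are assumptions for illustration; Alpha Image has not published the device’s actual mathematics.

```python
# Toy version of the calibration flow described above: fit a distortion
# model on the computer, then hand only its coefficient(s) to the
# correction engine. A one-term radial model,
#   r_true = r_dist * (1 + k * r_dist**2),
# is assumed purely for illustration.

def fit_strength(samples):
    """Closed-form least-squares estimate of k from (r_dist, r_true)
    radius pairs measured off a calibration image."""
    num = sum(rd ** 3 * (ru - rd) for rd, ru in samples)
    den = sum(rd ** 6 for rd, _ in samples)
    return num / den

# Synthetic measurements generated from a known k, to sanity-check the fit.
true_k = 0.2
samples = [(rd, rd * (1 + true_k * rd ** 2)) for rd in (0.2, 0.5, 0.8, 1.0)]
coeff = fit_strength(samples)  # this number is what you'd "download"
```

In practice the radius pairs would be measured from a grid chart or known straight edges; the point is only that each camera’s calibration reduces to a handful of numbers that can be stored and recalled as a per-camera preset, which matches the preset workflow described later in the conversation.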
Jim Jachetta (13:20):
There we go.
MC Patel (13:21):
So what we actually have is a number of these controls. Starting from the left-hand side, we have what we call the strength of the distortion; that really is the amount that controls the curvature. Now, when you straighten the curved picture out, you’re going to have some missing bits, because the curve doesn’t always fit into the square. So you need to zoom the image in or out to make sure that you don’t have artifacts of the distortion correction.
Jim Jachetta (13:55):
Does the picture bulge, or do you miss… You’ve got to crop in, right?
MC Patel (14:00):
That’s correct. Yeah. So you lose a little bit of the image when you do this. So when you’re setting up your camera for the shot, you should normally put it a little bit further away than you normally would, so that when you undo the barrel distortion, it comes good. So the first control is the distortion itself, and the next one is the zoom. And then we realized, “Hey, we can plug a mathematical equation into this.” The assumption with this distortion is that the camera is perfectly positioned in terms of X, Y, Z space. And what we basically thought is, since we’ve got the mathematical ability, we can come along and say, if the camera needs to be tilted up or down slightly, or moved left or right, we can do that.
MC Patel (14:53):
So we have a three-axis camera position corrector built into this. Now, this is really interesting, because we had a person who said, “I’ve got this camera on a pitch, but at the center of the pitch I couldn’t place the camera exactly dead-on. I had to put it slightly to the left or slightly to the right. Can you fix that?” And we can; we just reposition the camera mathematically. The most irritating ones, actually, are these goal cameras, because they’re generally put on the ground rather than on a tripod. So again, you’re looking up, and the effect is a bit like this; I’m just moving my laptop screen to show you the sort of thing. The distortion correction works for any position of the camera.
MC Patel (15:48):
Because it’s correcting the lens. If you have a roaming camera, you just do the distortion; you can’t do anything about the XYZ, because you’re roaming and the XYZ position is changing. But if the camera is static, these XYZ controls are really, really handy. Now, I haven’t shown it in this particular UI, but we do have the ability, in a newer release than this slightly older screenshot, to decouple this. So if you have any asymmetry in your camera… We had a person who said, “Oh, we have a RED camera, and the viewfinder doesn’t give us the aspect ratio precisely, so we want to do a slight aspect ratio tuning.” So we can decouple things and give you an X zoom and a Y zoom that are independent.
MC Patel (16:38):
And similarly, we can do that for all the other parameters. This gets very complicated and dangerous, but there are a few people who want to do it. You can gang the controls so that you don’t do that, or you can un-gang them; it’s basically an advanced mode. What it allows you to do, for a shot where you say, “I’m going to use this shot day in, day out, I really want to make it perfect,” is exactly that. And what you do is you plug a USB cable into the AlphaEye, connect the computer to it, and do it there. We also have a mode where you can connect over Ethernet, so if the trucks have Ethernet connections and networks available, you can drive it through the network.
MC Patel (17:32):
So in the UI, as I said, you calibrate it, save it in the unit itself, unplug the computer, and then even if you power cycle it, the AlphaEye remembers all the settings. A couple of other things: it also senses the sources. So if your output configuration is 1080i but your sources are coming in at 720 or 1080, interlaced or progressive, it will detect that and automatically apply the correct processing, so that the geometry engine knows what it’s getting and what it has to output. So quite a lot of bang for the buck inside the little box.
MC Patel (18:17):
So if there are any questions on this, please ask; we have people monitoring the channel and we’ll happily answer them. Do you want to go to the next slide? So this is just a reiteration of the resolutions we support, the inputs, the outputs, and so on. We have dual-link support on the input. It’s single link on the output, because the genlock option came after the design was committed, so we lost [inaudible 00:18:55] in there. But probably the best way, Jim, to show what this does is to just play a clip and let people enjoy it. And then I’ll talk over it.
MC Patel (20:05):
Yeah. And I’ve got voice. So you can see on the left-hand side the uncorrected image, and on the right-hand side the corrected image. And the goalpost looks like a goalpost. You can see some of the image has been cropped, because when we did the correction, we had to crop to get the proper frame. And if we [inaudible 00:20:33] sometimes you can see the graphics, because this is just recorded off YouTube and we’re just playing it into the HDMI. Now, that’s the other interesting thing: you can use recorded material and put corrections through it. So if you dig something…
Jim Jachetta (20:47):
Yeah, normally you would do the lens distortion correction before you did the graphical overlay; in this one, that wasn’t the case.
MC Patel (20:56):
That’s right. Yeah. But the quality is pretty good: the lines are straight, and watching that shot is a nice experience. As I said, we’ve had people in Brazil look at this. And actually in Malaysia we’ve had some customers. The Malaysian football league obviously is not as rich as some of the other ones, so they wouldn’t televise this. So a small production company came up with the idea of saying, “We’ll use low-cost cameras, do the whole production, and literally stream the matches.” And they were interested in this because it allowed them to get higher-quality shots with their handheld cameras and so on. [crosstalk 00:21:46]
Jim Jachetta (21:45):
Some of the feedback, MC, that we’ve gotten is that people really like what this box is doing, but they’ll say, “Well, I have a $300 camera and your little box has tremendous horsepower; it costs a couple of thousand bucks.” But I think, as you and I discussed the other day, you really have to look at the value of the shot. If spending a couple of thousand bucks on this box fixes the shot of the whole game, the penalty kick that wins you the cup, you catch it with this camera. So I think sometimes people need to reframe their thinking: yes, the box might cost more than your camera, but what’s the value in correcting this shot?
MC Patel (22:37):
Absolutely. Yeah. And I go back to my cricket analogy. Now, the cost of the camera is low, you’re right, but the cost of cabling is 10 to 20 times higher. In the case of the cricket game, you’re literally going to the center of the pitch and burying cable to get it there, with a little box that the feed comes out of. But the thing is that they use different bits of the pitch through the season in a typical ground, so there are actually half a dozen of these junctions that they have to provide. Now, the cost of that is 10 times the cost of the AlphaEye. So as you say, you have to think of this in terms of: what is the value to the production?
MC Patel (23:34):
And the producer’s sitting there saying, “I get to have a great shot that I couldn’t budget for any other way, or couldn’t get any other way.” And that’s really where we’ve seen the interest. The Malaysia team were interested because they could get production value looking good with low-cost operators and cameras. The Brazil guys were looking at it because they’re saying, “I can put in three cameras and get three of my shots.” And so they were getting their value from that point of view. So you’re right: you have to think of it as production value, decoupled from the cost of the camera.
Jim Jachetta (24:19):
Well, you touched on it at the top of our conversation, and I want to bring it back. We’re talking to networks that have thousands of cameras for the weather, and sports leagues and sports networks that have dozens or hundreds of cameras at different venues. Of course, we’d love to sell them a hundred of these boxes, right, MC? You’d make them. But you’re not going to go live with all these cameras simultaneously, so you’d buy a few of these boxes, feed them on an input and output on your switcher, and route the camera through the box when you need it.
Jim Jachetta (25:11):
And you have the presets, so you could have the visitor’s goal and the home team’s goal, with presets for both. And maybe you don’t catch it live; maybe when there’s an interesting shot, you play back the replay corrected. So there’s a number of different things you can do. You can correct it live, or if something interesting happens, maybe now it’s worth the effort to correct the instant replay a few seconds or 15 seconds after it’s happened. So if your budget is limited, you could get away with a few of these boxes instead of dozens.
MC Patel (25:56):
So if you look at the football, for example, for the goal mouth there would be just one camera, as you see, and another one on the other side. Now, you could have those two cameras lined up, and because the ball doesn’t go from one end to the other in one frame, you can switch as the action plays; you can sort of say, “Go to setting number two, go to setting number one.” So you could do it that way. But as you say, if you have remote feeds and things coming in, provided you’ve managed that, two or three units in a truck would more than adequately cover a game.
Jim Jachetta (26:38):
I’m not a production expert, but I’m sure with some automation and macros on a production switcher, you could do that plumbing, that routing between the visitor’s goal and the home goal. And if they’re the same camera with the same lens, you might even have the same presets, the same setup, right?
MC Patel (27:00):
Yeah. If you say, “I’ve standardized on a particular camera,” then, as I said, for the distortion you can switch any [inaudible 00:27:11] If you care about the camera position, then you have to have a… But you could have one unit where you say, “This one is just going to switch through the cameras, and I’ll get the benefit of no distortion,” and another that’s a more precise setup because it’s dedicated to a particular camera. Now, these are all production choices in how you make them. But as you said, we’d love to sell one per camera; the reality is you can only see one picture at a time, so if you manage that well, you can get some good efficiency out of it.
Jim Jachetta (27:47):
And then the other thing I want to point out, which you mentioned: you set the camera on its tripod a little further back, like you said, because some of the top, bottom, and sides would get fluted, right? Or it would look like an hourglass: wide on the top and bottom and pinched in the middle, right? [crosstalk 00:28:14]
MC Patel (28:14):
It literally looks like an hourglass. But if you have really wide-angle lenses with a fisheye, you will lose some image, and that’s a matter of planning. So people get very excited about this, and what we’re saying is, “You still have to create the shot you want by carefully placing your cameras. And you don’t want to use this repositioning if you can place the camera in the right place and the right location in the first place.” [crosstalk 00:28:44]
Jim Jachetta (28:44):
Right. If the camera was too close, we’d lose the goalposts. And in this type of shot, that would be bad.
MC Patel (28:51):
Yeah. But we had an experience when we went to some horse racing in England, where they were putting cameras where the horses come out of the trap. What they had was people getting up on ladders, placing them, then watching and saying, “Oh dear, that’s not in the right place.” They looked at this and thought, “Wow, this solves the issue, because I’m doing this four or five times.” But as I said, if you use this as part of your coverage strategy, and work with the producers and plan it, there are some amazing results.
MC Patel (29:28):
We’ve shown you this as an example, and it’s got its limitations, like everything does. But once you’ve thought about it, positioned everything properly, and got the right utilization… If you recall, we spoke to a couple of customers who started with those questions, and after a week of playing around with it, they said, “Oh, right, now we understand. We’d have to move the camera back a little bit. We can get more flexibility with how to do this.” So if it’s planned correctly, we think this is a tremendous box.
MC Patel (30:07):
Okay. I think we’ve covered the meat of the subject. So if there’s any questions, I’m happy to take them.
Jim Jachetta (30:15):
Yeah. I’ll just go through some of these examples here. What’s nice about this product is it’s pretty self-explanatory, I think.
MC Patel (30:26):
Yes, it is. Yeah.
Jim Jachetta (30:32):
Please reach out to VidOvation if you want to learn more about this product. We’re the distributor for AlphaEye in the US, and we work very closely with MC. This is really an amazing product. Quite a few customers, including a weather entity and a sports network, are looking at this really closely here in the US, and I think it really does have its place. It works really well. We have several units here in the US for evaluation, and we welcome you to demo or try the product. I’ll try to get this recording up inside of a week; we may have it up Friday or early next week. We transcribe the webinars.
Jim Jachetta (31:23):
We put a produced version of the video up on our website, and the presentation will be there as well, so you can watch the video and download the PowerPoint. We also put the video on a video podcast, and we have audio podcast versions of all of our webinars. So whatever your method of learning or ingest, we should have you covered. Thank you so much, MC.
MC Patel (31:51):
It’s been a pleasure.
Jim Jachetta (31:53):
The sad part about this, folks, is I was looking forward to seeing MC at IBC. When you go to the Alpha Image stand at IBC, at the end of every day they have a happy hour where they give you a little Tanqueray and tonic, a little gin and tonic. It’s very English, right? A little [crosstalk 00:32:16]
Jim Jachetta (32:18):
But unfortunately, we can’t do that today. So thanks everyone for joining. Reach out to us if you have any questions. And if you’d like to do a demo, there’s more about the product on our website. So thanks again, MC, and I hope to see you soon.
MC Patel (32:36):
Yeah, I hope so too. Now, fortunately for me, it’s 6:30 in the evening, so I do get my gin and tonic.
Jim Jachetta (32:43):
Very good. Very good. It’s a little early for me; it’s 10:30. So thanks, MC, thanks, Cindy, and thanks everyone for helping today. And everyone, please be safe out there in this new abnormal we’re living in. I hope all of you are staying safe and healthy, and reach out to VidOvation if we can help you with any of your video and broadcast production needs. Thank you so much.
MC Patel (33:11):
And thank you for attending. Thank you very much.