Learn how you can transform your broadcasts with compact, rugged, and affordable HD/UHD ATOM cameras from Dream Chip. They are a game-changer!
During the webinar, you’ll learn:
How to identify the right mini camera for your application
How to build an unlimited number of mini cameras into a unified camera control system (Sony or CyanView)
How the feature set and capabilities of Dream Chip ATOM cameras enable seamless integration with other camera feeds
How leading broadcasters, media companies, and sports leagues have used various ATOM models to get creative with camera placement
Jim Jachetta (00:00:01):
Okay. I think we’ll get started. Good morning, everyone. I’m Jim Jachetta, co-founder and CTO of VidOvation. And today on our Wednesday webinar, we have a very special guest, Stephane Dubocu, Director of Global Sales from Dream Chip Cameras. Welcome.
Stephane Dubocu (00:00:22):
Yeah. Thank you, Jim, for giving me the opportunity to show our stuff. Excellent. So should I start now? Is it all fine?
Stephane Dubocu (00:00:35):
Okay. So let’s start with that. So good morning, everyone in the U.S. I’m Stephane, and I’m based in Belgium, so I speak French and English, but no German, even if my company is German. I will explain a little bit about Dream Chip, of course. I have really nice toys here, so I will be able to show you some nice pictures and some demos. So stay tuned, because I will really get into the details of how the cameras work, what the difference is with the competition, and some examples of slow-motion, because we make a very nice slow-motion camera as well. I will also end with some use cases.
Stephane Dubocu (00:01:23):
One is really nice, so I will spend some time explaining the background of it, Jim. So let me start with the presentation. Sorry for the quality of the video; it seems the stream is not super good picture quality, but here it’s working fine, and I will try to go slowly on this. And Jim, just let me know if I need to put some more light on some information. So as I’ve told you, I will start with the introduction, speaking a little bit about Dream Chip, who we are and what we do. Then I will jump into the ATOM cameras, the very small cameras that we have. These are really mini cameras, like POV cameras, but they are a bit special.
Stephane Dubocu (00:02:27):
So I will explain why. Then slow-motion, and then some words about Barracuda, which is a streaming platform, an H.265 SRT platform, which I think has really nice features. Let me maybe start with a very interesting white paper from InFrance, which is a huge company. With these COVID times, remote production and everything, these empty stadiums, they were writing that they try to get more camera angles, new perspectives. And I really believe that’s what we do at Dream Chip, so I was really happy to read that from them. That’s really what we do at Dream Chip: bring you new tools, new angles, new cameras, really small cameras, but broadcast quality. I will show you that just after. But I think it’s really what we do. So let’s start with Dream Chip, Jim.
Stephane Dubocu (00:03:40):
Dream Chip is kind of an old company, actually, because some of my colleagues have known each other for 30 years. So three, zero. It’s not a new company, but we decided quite late to do our own products, with our name on them. Before that, we were doing a lot of OEM products for a lot of manufacturers, and not only in broadcast, because our main focus is really FPGA development, sensors, video processing. That’s really our DNA. When it comes to Dream Chip, we have around 125 engineers. So it’s not a small company; it’s not huge, but it’s 125 people. It’s really technical, really deep engineering that we do at Dream Chip. We are based in Germany, in Hanover, if you know it, and we have three sites in Germany.
Stephane Dubocu (00:04:41):
And last year we bought a company in Holland, xNXT, so Philips and then Samsung. So we got 25 more people last year. As I was saying, Jim, we don’t just do cameras; that’s one of our business units. We are also really focused on mobile phone and automotive products. Just to give you an example, I don’t know if it’s really popular in the U.S., but on the new electric car from Audi, you don’t have the side mirrors anymore. Those are cameras, and that’s our technology. We are supplying Audi with the side mirror cameras. Huge business, as you can imagine. It’s really about picture quality, and there are a lot of constraints around that, so that’s really our main focus. But my CEO decided we needed to make cameras, put our name on something, and do something different from what exists on the market, of course.
Stephane Dubocu (00:05:49):
We didn’t want to compete against our customers, which makes sense. So we have two lines of products: cameras and IP streaming devices. I will start with the cameras, because as I was saying, we have really nice toys at Dream Chip, so I will explain what we do. Basically, we have eight different cameras, from HD to 4K, and I think that we are the smallest on the market. We have really tiny cameras. They are all 1080p HDR, so you can have HLG, PQ and S-Log3 on every single model that we have. It really goes from the very small ones; I will show you some specifications, and you’ll see it’s really crazy small. The ATOM one mini AIR is really, really small.
Stephane Dubocu (00:06:52):
If you remember, Jim, two years ago, Riedel was doing this Red Bull Air Race with the planes flying around.
Stephane Dubocu (00:07:04):
So this ATOM one mini AIR was specifically developed for them. They really needed a very small camera with maximum picture quality and SDI; of course, all our cameras are SDI with full control. I will show you also how we control the cameras and what kind of broadcast control we can get on the camera. If you’re not familiar with cameras, I will try to explain it in detail, of course. But if you know a little bit about that, you’ll see it’s really a full-featured broadcast camera. We have all the features that we need, which is really important, and I will explain just afterwards why it’s important. So let me jump into the details. Of course, we are all over the world, especially in the U.S. We have our cameras already on big events, from MLB to golf with the U.S. Open. I will show you some pictures, but we were at the Super Bowl this year. So we have really nice references.
Stephane Dubocu (00:08:11):
This is what I was trying to explain about the picture quality. The most important thing when it comes to a broadcast production is that all the cameras look the same, which is a challenge, because all the sensors look different. They see the colors differently, you see. Let me give you an example, Jim. If you have a football match, what is important is that the green of the grass is the same on all the cameras, so when the director is switching from one to the other, the green should be the same. The blue of the sky and the colors of the jerseys should also be the same, because they don’t like it when the picture looks different, because then you see that you are using a different camera.
Stephane Dubocu (00:08:57):
And that was really one of the weak points of the existing POVs on the market. I mean, as we say, you get what you pay for. They were really cheap, but then you get what you pay for. So here we are really bringing value in the camera. Don’t get me wrong, Jim, the pricing is really affordable. When it comes to slow-motion, we have really the cheapest slow-motion in the industry, but the quality is not cheap, as you will see. So here, it’s really important to have a tool that helps the vision engineers to paint, to shade the camera to match the main camera, and that’s what we call Multi Matrix. If you’re not familiar with Multi Matrix, you see here a vectorscope, and the vectorscope is just showing vectors of all the colors. We can have 12 vectors, 24 vectors, 48. So it’s really, really precise.
Stephane Dubocu (00:10:10):
Most of the engineers in the field are happy with 12, because it takes time to really match everything, and with 12 vectors you can cover really the whole range of colors. So let me show you how it works, and let me explain what my setup is here. I have different cameras, so let me jump to this view. I’ve got different cameras here, including the slow-motion, the ATOM one, the ATOM one mini, the 4K. I’ve got an RCP to control the cameras, so I will be able to show you that also, and how we paint the camera. I’ve got some pan-tilt controllers for the pan-tilt head; I will also explain that, but let me show you this first one. So let me just disappear for a few seconds. So, PowerPoint, demo camera.
Stephane Dubocu (00:11:12):
So this one. Let me disappear for a while, because I would like to show you the bottom right. Or maybe I can zoom into it; that’s going to be more visible, because the picture quality will be better. So let me jump into full screen. You see here my color chart, a green Mercedes at the back, and my turntable at the far back for my slow-motion. What I want to show you here is the Multi Matrix, because it’s a very important tool that we have, which is unique. Multi Matrix is usually found in the very expensive CCUs from Sony or Grass Valley, at a really different level of pricing. I’m speaking euros, but the camera I’m showing you at the moment is €1,149, so around $1,500, something like that. And you have Multi Matrix on those ones.
Stephane Dubocu (00:12:21):
So how does it work? Here I will play with the RCP. I will go to the right camera and enter my menu to get my Multi Matrix, and then I will come back to the camera view. This is what we call a Gate Effect. The Gate Effect is really cool, because it will highlight the color you want to change. And what is important, when you do that and change that color, you change only that color. That means you are not moving the other colors; you’re just moving the red, or whatever color you are selecting on the vectors. As I’ve told you, Jim, we have 12 vectors here, but it can be more than that. So let me go to the green of the car. This is the green of the car, and let me just change the color.
Stephane Dubocu (00:13:14):
So I will put in more saturation, like this, and a little bit of hue. You’ve seen that the quality is good enough that you’ve seen the color changing. Then I will just put back the other colors. So just the green has changed; the other colors are the same. If I take out the Multi Matrix, like this, it just goes back to normal, and if I activate it, then just the green changes, and I can do that individually for every color. So I can calibrate my camera and really match it to almost anything.
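The gated secondary correction demonstrated above can be sketched as a hue-windowed adjustment. A minimal illustration in Python; the 30-degree window, the gain values, and the function name are assumptions for illustration, not Dream Chip's actual implementation:

```python
import colorsys

def multi_matrix_adjust(rgb, target_hue_deg, sat_gain=1.0, hue_shift_deg=0.0,
                        window_deg=30.0):
    """Adjust saturation/hue only for colors near one vector (hue angle).

    Colors outside the +/- window around the target hue are returned
    untouched, which is what makes this a *secondary* correction:
    pushing the green of the grass does not move the red of a jersey.
    """
    r, g, b = rgb
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    hue_deg = h * 360.0
    # Angular distance to the selected vector, wrapping around 360 degrees.
    dist = min(abs(hue_deg - target_hue_deg),
               360.0 - abs(hue_deg - target_hue_deg))
    if dist > window_deg:
        return rgb  # outside the gate: leave the color alone
    h = ((hue_deg + hue_shift_deg) % 360.0) / 360.0
    s = min(1.0, s * sat_gain)
    return colorsys.hsv_to_rgb(h, s, v)

# Boost saturation of greens only; a pure red pixel is untouched.
green = multi_matrix_adjust((0.3, 0.6, 0.3), target_hue_deg=120, sat_gain=1.5)
red = multi_matrix_adjust((0.8, 0.1, 0.1), target_hue_deg=120, sat_gain=1.5)
```

The key property is the early return: colors outside the selected vector's window pass through untouched, which is what distinguishes this secondary correction from a global RGB matrix.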
Jim Jachetta (00:13:53):
So like you said, the soccer, the football player’s jersey, if it’s an odd color, you can target that one color out [crosstalk 00:14:02].
Stephane Dubocu (00:14:03):
Exactly. And that’s really important, again, because they should all look the same. With those Multi Matrix corrections, combined with the RCP I’m using, and I will come to the RCP just after, you can make presets, you can make groups of presets. Everything is saved in the RCP, which is also very interesting. You can save all the presets in my cameras, of course, but if you have a failure in the field, you can just change the camera, and your RCP pushes the right values back to the camera. So it’s really quick and efficient if you need to change something in the field, for whatever reason. So, that’s really easy. So [crosstalk 00:14:49].
Jim Jachetta (00:14:50):
So Stephane, this is very unique in a low-cost, $1,500 camera, to have the kind of broadcast hooks, controls, color balance. There’s nothing else out there like that. Right?
Stephane Dubocu (00:15:06):
Absolutely, Jim. There’s no one that does that. I mean, some of them have some basic correction, like a very famous brand in the U.S. that I will not name here, but everybody knows those POVs. They don’t have those kinds of tools. So, I mean, again, those are decent cameras for the price you pay. They may be good enough for some people, but if you want to get to the next level, and really get something at good pricing, then we do something unique that nobody else is doing, indeed.
Stephane Dubocu (00:15:42):
And as I was telling you, we really have almost everything. So let me jump to the RCP. Here, Jim, I’m using a CyanView RCP. Let me show it like that.
Jim Jachetta (00:15:57):
I’ve heard the name before.
Stephane Dubocu (00:16:00):
I think so.
Jim Jachetta (00:16:01):
We have worked with them for a few months now. You used to work at CyanView, correct? Earlier on?
Stephane Dubocu (00:16:06):
Yeah. So, David, who is the founder, is a good friend of mine, and we founded the company together a few years ago. Previously, we were working together for a slow-motion company called I-MOVIX, which was quite popular in the U.S.; Fletcher was one of our biggest customers, and they were really using a lot of I-MOVIX systems. And then we launched CyanView, and that’s the story. So we started it together, the two of us, right?
Stephane Dubocu (00:16:39):
So let me explain the story. When we launched CyanView, we were looking for mini cameras, and we checked a lot of cameras. The best we found was Dream Chip. So we based our development on the Dream Chip cameras, and then, for some personal reasons, David and I decided to split, and it was a natural move for me to go to Dream Chip in the end, because I was already selling a lot of cameras for them. It was really natural to go there, but also interesting, because I already knew the product really well; that’s the one we had selected before. So that’s the link, and yeah, some history between us.
Jim Jachetta (00:17:20):
Yeah, yeah. It’s good, though, that we’re all kind of in the same extended family.
Stephane Dubocu (00:17:26):
That’s exactly the idea. So here you have some arrows at the very bottom of the RCP. You see on the screen, let me put it like this, you see the name changing. With one RCP, you can control as many cameras as you want; there’s really no limitation. And if you had other cameras at the same time, like a PTZ camera from Sony or Panasonic, you can control different protocols at the same time. And see how quick it is to go from one camera to the other, and to get the feedback from the camera; you see really the values changing. So with one RCP, you can control as many cameras as you want. I have six cameras now; I’m switching between them, and I can switch my video matrix at the same time.
Stephane Dubocu (00:18:17):
So if I go to my quad, actually, it’s the RCP that is switching at the same time. When I’m switching the cameras, it’s also talking to different products around it, so it’s more than just an RCP, actually. But my point was really to show you what we can control on the cameras. If I go to Paint 1, I have a first menu here called Paint 1. I have the gamma correction that I can play with. I have the black gamma that I can also change. I can change my saturation. I can play with my details; I can turn them on and off. If I put in too much detail, then you can get some noise in the background.
So we have a denoise filter. You can increase the detail but also decrease the noise, because they go together; so we have that filter. Then if I go to Paint 2, I have the normal matrix, so the red-to-green, the red-to-blue. The normal matrix is what people in the field usually know how to work with. But it gets really interesting when it comes to the Multi Matrix. So maybe I can show you that; I don’t want to make an advertisement, but maybe I can show you again, with a can, how easy it is to change it.
Jim Jachetta (00:19:43):
That’s a suspicious red can, could that be Coca Cola by any chance? [crosstalk 00:19:47]
Stephane Dubocu (00:19:49):
I’m not paid by them, so I cannot name them. So look how easy it is. I will change the red, of course, as you certainly understood. I can enable my Gate Effect again. You see that I’m on the green, because the green of the camera is on; I am on vector nine. So let me come back to number one or two; one is the red. Then I can increase the saturation, I can adjust the hue, and this is without, so you see off, this is without my changes, and this is with my changes. So you can see that the red is no longer purple or pinkish, maybe just a little bit.
Jim Jachetta (00:20:33):
That’s correct. I guess the important thing is, because of the way your camera is built, you can isolate a single vector without affecting any adjacent colors, any adjacent vectors.
Stephane Dubocu (00:20:49):
Absolutely. So that’s what we call secondary correction. Most of the color correctors are just playing on the primary corrections, which is just pure RGB correction.
Jim Jachetta (00:21:03):
That’s very true.
Stephane Dubocu (00:21:04):
Yeah. That’s really the difference that we have with the others.
Jim Jachetta (00:21:09):
You said it earlier, what’s the maximum number of vectors you can control?
Stephane Dubocu (00:21:14):
So the maximum we can do is 48. That’s really a lot, because then you have really small nuances between the colors. But usually, the people in the field like 12.
Jim Jachetta (00:21:29):
Because that’s plenty?
Stephane Dubocu (00:21:31):
Yeah. We can do more. So if we have really picky engineers, we can go with more vectors. But honestly, in the field, they are really satisfied with 12. So we can do more, but again, 12 is really good enough.
Stephane Dubocu (00:21:49):
We have the KNEE function, like on the Sony cameras, for instance. That’s also unique; nobody in the POV space has that kind of feature. And then we have the white clip as well, so we can also play with those tools.
Jim Jachetta (00:22:05):
What does the KNEE effect do, again? I don’t recall. Explain that.
Stephane Dubocu (00:22:09):
That’s a good question. So unfortunately, I cannot show it to you here, because I don’t have a setup that can demonstrate it. KNEE is really about changing the gamma curve at a certain point, the knee point, for when you have a really big contrast, with a window for instance. I’m in Belgium, so it’s already night, so I cannot show you with the window; it’s just black outside. But the idea is really to compress the highlights, so that the highlights are not burned out.
Jim Jachetta (00:22:44):
Like a window would blow the image out. So you contain that.
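The knee behavior described above can be written as a simple piecewise transfer curve. A minimal sketch in Python; the knee point and slope values are illustrative, not Dream Chip's actual parameters:

```python
def apply_knee(signal, knee_point=0.85, knee_slope=0.25):
    """Compress highlights above the knee point.

    Below the knee point the transfer curve is untouched; above it,
    values are compressed with a gentler slope so bright areas
    (e.g. a window) are not clipped to pure white.
    """
    if signal <= knee_point:
        return signal
    return knee_point + (signal - knee_point) * knee_slope

# A highlight at 1.4x reference white is pulled back under clipping:
# 0.85 + 0.55 * 0.25 = 0.9875
compressed = apply_knee(1.4)
```

Below the knee point the curve is untouched; only the highlights are compressed, so midtones keep their contrast while a bright window stays inside legal range.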
Stephane Dubocu (00:22:51):
Yeah. That said, we also have Auto Tracing White, Auto White Balance, and things like that. And we can also weight the picture, saying that this side of the picture is more important than the right side, so the camera’s computation is not thrown off in the wrong direction by big windows.
Stephane Dubocu (00:23:16):
So we can also add some weighting if we want, but usually, again, people use KNEE, because it’s really powerful. I can just screw up my white balance; let me put it like this, red and blue at maximum. So it’s really ugly; I have just a lot of white balance offset. It would be better to have a white paper and do it against a white…
Stephane Dubocu (00:23:42):
But that’s good enough for the demo, I guess. It’s interesting to show you that, because it means you can also play with the white gains individually; you can play with the blue and the red gains. You can do the same with the blacks, the pedestals, so you have full control of the black levels as well. We have the master black; you see that my picture is really becoming grayish, but I can of course decrease it. So really, it’s full control, like a broadcast camera, in these really small cameras.
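The per-channel gains and master black lift being demonstrated can be sketched as simple arithmetic on each pixel. A minimal illustration in Python; the function name and clipping behavior are assumptions for illustration, not the camera's actual processing:

```python
def apply_white_balance(rgb, red_gain=1.0, blue_gain=1.0, master_black=0.0):
    """Per-channel gains plus a master black lift, as on a broadcast RCP.

    Pushing red and blue gains to extremes 'screws up' the white balance
    exactly as in the demo; raising master_black lifts every channel,
    which is why the picture turns grayish.
    """
    r, g, b = rgb
    r = min(1.0, r * red_gain + master_black)
    g = min(1.0, g + master_black)
    b = min(1.0, b * blue_gain + master_black)
    return (r, g, b)

# Neutral gray stays neutral at unity gains; lifting master black
# turns pure black into dark gray.
neutral = apply_white_balance((0.5, 0.5, 0.5))
lifted = apply_white_balance((0.0, 0.0, 0.0), master_black=0.1)
```

Lifting `master_black` raises all three channels equally, which is exactly why the demo picture turns grayish, while unbalanced `red_gain`/`blue_gain` tints neutral grays, which is what a broken white balance looks like numerically.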
Stephane Dubocu (00:24:22):
We also have some modes here. I will show you that later in the slides, but we also have lens motors. So if you want to motorize the iris, to have control of the iris, I will show you some pictures; but my point is that here, as you can see, I can really play between the shutter and the gain.
Stephane Dubocu (00:24:46):
And the first line would have been my iris control, so I could control the iris from it. Then I can play with the gain and then the shutter, and I can make a combination of those three settings. Again, like a broadcast camera, it’s really complete in terms of control. Totally. We have a tally button, but again, that’s through CyanView, because you know that they have what they call a CI0, this camera interface; they have all the tally information on it, so they can link my camera with the tally. I forgot to mention something important, Jim. The control of my camera is RS-485, or possibly RS-422 on some cameras. I know that in the U.S. you like RS-422; more reliable, of course. Not all of my cameras do RS-422, but some of them, the ATOM one and the 4K, do.
Stephane Dubocu (00:25:51):
So the one you see now is doing RS-422. It’s serial communication, and serial communication is always complicated in the field because of the length of the cable-
… the distance. Then you can of course bring in some fiber solution, and I know that you have nice options for that, but CyanView is IP. So that means we simplify all the cabling-
… because everything is, you see, just an RJ45 here. Then I go IP over RJ45 up to my camera and this small box. If you put some PoE power on the RJ45, at the end of the day you just need to bring an SDI cable and the RJ45: you power the camera, you control the camera, and you have the image back at the mobile unit.
Jim Jachetta (00:26:40):
So you use the CyanView CI0 or RIO to convert from serial to IP.
Stephane Dubocu (00:26:47):
The CI0. There’s a new one, the Neo, that they have also. And we use that.
Jim Jachetta (00:26:51):
Right, right. We have another webinar that goes into that in more depth…
Stephane Dubocu (00:26:57):
I was showing CyanView, but the idea was more to show my cameras and what they can do in terms of control. So again, it’s really easy to switch from one camera to the others. I think it was important to show you what the camera can do. Here I will show you the pictures, but I also have my pan-tilt controllers that go through the CyanView again. It’s really convenient to have a tiny, tiny remote pan-tilt head with a mini camera on it; you’ll see, it’s really amazing. So Jim, I think it was important to show the features and to show the Multi Matrix, which really makes a difference against the competitors. That’s where we have our value: it’s really about picture quality and video processing. We bring really huge value there, which is what matters in the end in broadcast, of course.
Jim Jachetta (00:27:57):
Right. Fine. The cameras have to match absolutely.
Stephane Dubocu (00:28:01):
So when it comes to control, I did a small speech about CyanView, but for customers that already own Sony RCPs, for instance, we also have a kind of mini converter that translates the 700 protocol from Sony to my serial protocol from Dream Chip, and you get the feedback of the values, so that might be interesting. Of course, there are a lot of limitations; we don’t support all the features, because some are really dedicated to Sony, and it’s a point-to-point connection in that case. So again, with CyanView, you can control multiple cameras. And when it comes to mini cameras, Jim, that’s the idea: because of the price and the quality, they are used to add multiple angles, so it’s not just one camera in the field. And then there’s another RCP, called Skaarhoj, that also does some of this. That’s the situation.
Stephane Dubocu (00:29:02):
Maybe some details about the cameras, Jim. I will not give a lecture through all my slides, of course, because I know that everybody gets bored with long presentations. But I just want to highlight some of the bigger ones-
Jim Jachetta (00:29:19):
Give us an overview.
Stephane Dubocu (00:29:21):
Yeah, exactly. So I really want to show you some highlights of the camera. The ATOM one mini is among our really small, small cameras. I’m speaking millimeters; I’m sorry, Jim, so I tried to translate it into inches, as you can see on my third line. I hope that I’ve translated it correctly, but then you have an idea of the dimensions.
Jim Jachetta (00:29:44):
Yeah, so a little over an inch, it looks like. 1.1 inch.
Stephane Dubocu (00:29:47):
Exactly. Yeah, because an inch is 2.54 centimeters and we are at three centimeters, so it’s a little bit more than one inch. So imagine that in one inch by one inch by one inch, you have an SDI camera with SDI connectors and an FPGA inside, because the CCU is actually in the head of the camera. In that small camera, we have everything processed inside the camera, which is also an important point, Jim. Because most of the time, people are asking me, “Okay, how fast is your camera? What is the latency of the camera?” Because I have, for instance, an application at a concert with… how do you say that in English? The main guy that is leading the orchestra?
Stephane Dubocu (00:30:38):
Conductor, [chef d’orchestre 00:30:38] in French. Conductor. And sometimes there are so many people that they don’t see the conductor all the time, and they need to have a local monitor to see exactly how he’s keeping the rhythm. And I can tell you, you cannot afford to get latency, because you have some-
Jim Jachetta (00:31:00):
Yeah, they’ll miss their cue.
Stephane Dubocu (00:31:03):
Exactly. And then it’s at the wrong moment. So, we have almost no latency, because everything is processed in the FPGA that is really inside the camera. It’s really a question of one or two lines, out of 1080 lines, of delay. It’s really nothing; it’s really microseconds of delay. Let’s say there’s no delay. So one of the applications of the camera is that one, because of the near-zero latency that we have on the camera.
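The "one or two lines of delay" figure translates into microseconds directly. A quick back-of-the-envelope check in Python, assuming the standard 1125 total lines per 1080p frame (active plus blanking):

```python
def line_latency_us(lines_of_delay, frame_rate=60.0, total_lines=1125):
    """Convert a delay measured in video lines to microseconds.

    The SMPTE 1080p raster has 1125 total lines per frame (1080 active
    plus blanking), so one line lasts 1 / (frame_rate * total_lines) s.
    """
    line_time_s = 1.0 / (frame_rate * total_lines)
    return lines_of_delay * line_time_s * 1e6

# Two lines at 1080p60 come to roughly 30 microseconds of delay.
delay_us = line_latency_us(2)
```

At 1080p60, two lines come to roughly 30 microseconds, orders of magnitude below anything a conductor or musician could perceive.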
Jim Jachetta (00:31:39):
Well, I see here too, at the bottom, that this supports the Multi Matrix. Now, with a camera this small from another manufacturer, you’d have to have an external color corrector, right?
Stephane Dubocu (00:31:53):
Yeah. And I don’t know many that really have Multi Matrix inside. I will not say names, but everybody knows the color correctors that you can find on the market; except one that I know of, most of them have no Multi Matrix support.
Jim Jachetta (00:32:10):
There’s crude RGB correction?
Stephane Dubocu (00:32:15):
Exactly. So you have some gamma correction, you have the primary correction, so RGB, as you said, exactly. But when it comes really to the matrix, then you don’t have that. And then that’s also some cost, right?
Because that’s also quite expensive, so we don’t need that. That’s really important. In that camera, because of the size, of course we are using a small sensor, which is a rolling shutter. So I guess that you know the difference between rolling and global shutter. So rolling-
Jim Jachetta (00:32:48):
No. Explain that to those of us who are not familiar with it. What is rolling shutter?
Stephane Dubocu (00:32:51):
It’s quite easy. With a rolling shutter, the sensor reads line by line, and with a global shutter, it reads the full sensor at once. The difference is that when you have a racing car, for instance, with a lot of bumps and rolling, you can really get some blur and skew in the picture, and it’s not totally smooth with a rolling shutter. When you have a global shutter, it’s really straightforward; nothing is distorted at all.
Jim Jachetta (00:33:25):
It’s capturing the whole frame at a time instead of a line.
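The line-by-line readout Jim just summarized also explains the skew on fast-moving subjects: the bottom row is sampled one readout time after the top row. A rough model in Python; the readout time and pixel speed are assumed example values, not measured figures for any ATOM camera:

```python
def rolling_shutter_skew_px(speed_px_per_s, readout_time_s=1/120):
    """Horizontal shear (in pixels) between the first and last sensor row.

    A 'fast' rolling shutter means a short readout time, which is why
    little skew is visible: the bottom row is exposed only
    readout_time_s after the top row.
    """
    return speed_px_per_s * readout_time_s

# A car crossing the frame at 2000 px/s with an 8.3 ms readout is
# skewed by about 17 pixels top-to-bottom; a global shutter
# (readout_time_s == 0) shows no skew at all.
skew = rolling_shutter_skew_px(2000)
```

Shortening the readout time shrinks the shear proportionally, which is why a "fast" rolling shutter can look close to a global one on most motion.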
Stephane Dubocu (00:33:30):
That said, we have quite a fast rolling shutter, so we don’t really have this rolling effect problem. My cameras are used at NASCAR and the 24 Hours of Le Mans. I will show you a video of the Daytona 500, which took place two or three weeks ago in the U.S., shot from the drone; it’s a super cool application of the mini cameras, shaded live by CyanView for FOX. I will show you the video later, and you will see no effects like that. So at the end of the day, I think we have a very good rolling shutter, but global shutter will of course be a step further. And we do of course have a global shutter camera, so I will come to that in the next one.
Jim Jachetta (00:34:22):
I should say I’m not doing my job, Stephane; a couple of questions. So one question: can the cameras also work with the Rec. 2020 color gamut? Do you understand that question?
Stephane Dubocu (00:34:36):
Yeah, of course. A very important question, and yes, of course, because we are HDR. So we have the Rec. 709 gamut, but we also have Rec. 2020; a very important question. And I was not doing my job properly either, Jim, so it’s one to one; it’s perfect. But yes, of course.
Jim Jachetta (00:34:55):
Then another question: what video formats can we record to? I assume it’s 1080p60 over SDI, so any kind of SDI recorder; it’s uncompressed video.
Stephane Dubocu (00:35:10):
It’s uncompressed SDI, and we cover all the frame rates, including the cinema frame rates, really going from 23.98 to 59.94 and all the intermediate frame rates for Europe, Asia, and the U.S. So we cover all of them. All my cameras are 1080p HDR, but they can be interlaced; I can also output interlaced pictures over SDI, of course, 1080i50 or 1080i59.94, from the same camera. I will show you the next one, because it’s interesting regarding the frame rate. I will explain that to you [inaudible 00:35:47].
Stephane Dubocu (00:35:50):
Is that good for the question, for the moment?
Jim Jachetta (00:35:52):
Yeah keep talking.
Stephane Dubocu (00:35:55):
So for those mini cameras, they are so small that usually we use M12 S-mount lenses. And usually those lenses are quite bad, because they’re really basic CCTV kinds of lenses. But for the one that we are providing, we did really extensive research on the market, and that one is really sharp. So again, as I was showing before, that’s the lens that I’m using here. Trust me, the picture quality is really good and really sharp.
Jim Jachetta (00:36:36):
The GoToWebinar quality may not be good. I’m going to make a blog post with this video recording, and I will also put some of the higher-quality video links in the blog post. So things might be a little… The frame rate might be a little low; the color might be a little off.
Stephane Dubocu (00:36:57):
Because, please believe me, it's really good here. So come to Belgium to see it. So M12, and we are also providing this really sharp, very nice lens, which is 85 degrees. So it's really wide, almost 90 degrees, 3.4 millimeters, and it's included in the price. The price I quote always includes the lens for the mini. When it comes to C-mount, which is the next range I will show just after, then you have the choice to buy the lens you want. So we deliver a bare camera and you put on the C-mount lens you want.
Stephane Dubocu (00:37:36):
So this M12 is 85 degrees. You can of course find all the lenses on the market, so feel free to change the angle if you want. But that one's really good and it's provided with the camera. Interestingly, we also have a microphone on that mini camera. Don't get me wrong, it's not a Sennheiser-quality microphone; it's the kind of microphone you put in a mobile phone, and you know mobile phones nowadays have really good audio quality as well. It's really a flat SMD component. So you can really get some good ambient sound, the crowd in the stadium, or whatever-
Jim Jachetta (00:38:23):
It could be a backup to your Sennheiser mic and I-
Stephane Dubocu (00:38:27):
Absolutely. My customers are using it, so it works. It's embedded in the SDI and it brings some sound, of course. The weight is 45 grams; again, I'm speaking in grams and kilos, but it's really light, that's my point. If that one is too big, we have a smaller one: the ATOM one mini AIR, which is three by three by 1.8 centimeters, so less than one inch. That one is super small. It's so small that you don't have SDI connectors on the camera; there's no place for them. So we have two breakout cables, which also facilitate installation if you want to get it into some tight places, and you will see in my applications the kind of places we can fit. Same sensor, same camera, same control; it's just that the size is a bit different. For the rest, it's exactly the same camera.
Stephane Dubocu (00:39:36):
Then we have the Waterproof. That's the one I was showing you here at the beginning, a very small one. Again, the same camera, same PCB, same sensor; it's just that you have a waterproof housing around it. It's really waterproof. I can put it in the water for swimming pool applications or whatever; of course, the idea is to put it outside. So it's weather resistant, but it's more than weather resistant, it's waterproof, so you can really put it into the water. Same lens, same everything. It's just three cameras with a small difference because of the enclosure, the external housing, but the same product in the end. Same picture quality.
Stephane Dubocu (00:40:28):
Then I come to my best camera. I like that one because it's really, really good. That's my ATOM one, this small POV; you see, it fits in my hand. You have a global shutter, a two-thirds-inch global shutter. That's simply the smallest global shutter camera in the industry; you have nothing smaller than that camera with a global shutter. Picture quality and sensitivity, because of the size of the sensor, are also really good, because the previous one had a smaller sensor. No miracle, of course: with small photosites the sensitivity still does the job, but a bigger sensor is always better. So global shutter, and you'll see in the picture, and also on the camera, that I have two SDI outputs and one sync input. So I have genlock on that one; it can be genlocked, black burst or tri-level, like a normal broadcast camera. And the size is still really small, as you can see: three by three by six centimeters, so one inch something by one inch something by 2.5 inches, still really, really small. And because of the two SDI outputs, you can have two independent SDI feeds. So for whatever reason, you could have one in 1080p and one in 1080i at the same time; you can make a production in interlaced and record in progressive, or vice versa, it's really up to you. We can also load lookup tables; we can add 3D LUTs on the ATOM one mini, but especially on that one. Then I can have SDI 1 with the normal, clean output, and on SDI 2 we have a look, so you can add the look on SDI 2.
Stephane Dubocu (00:42:27):
So a very nice camera, and really sensitive. Very interesting. Then, because UHD is coming, of course, we have 4K cameras. We have three models: the 4K mini 7, 11, and 16. They are the same size, the same body; it's just that the 4K sensor in one is small, one is medium, and one is large. It's like at McDonald's, you can get different menus. So the 4-
Jim Jachetta (00:43:00):
Stephane Dubocu (00:43:03):
You can size it, so you can upgrade, I think. So the 4K mini 7 has a small sensor; a very good camera, affordable broadcast-quality 4K. Very interesting for people who want to step into the 4K world, because again, it's Dream Chip quality with S-mount lenses; the same S-mount we put on the ATOM one. A great camera, really, honestly, a very nice one. Then the mini 11 is really interesting because it's a two-thirds-inch camera.
Stephane Dubocu (00:43:43):
And because it’s a two third inch camera, you can add B4 ENG Lens adapter. So you can have a nice ENG camera with a mini camera at the end, and you shoot 4k with a broadcast quality lens. So that’s really impressive. So that one is really good for that kind of specific application. That one like the ATOM one is C-Mount. So we are not into the S-Mount anymore, and C-Mount is really… More and more like core lenses, you get really nice lenses with very good picture quality for C- Mounts nowadays. Maybe let me just say something, Jim, because I’ve also always the question, we do POV, so point of view, wide angle. So you put that behind a basket ball, on the game ball to get the action, you need to be close to the action so you need to see really wide.
Stephane Dubocu (00:44:51):
I’ve often the demand to get, okay, what about the zoom? When it comes to C-Mounts, the trade-off has been made in a way that C-Mounts, you don’t really find good zoom lenses at this stage, because again, we were more into the wide angle stuff. That said, computer, which is also a very nice brands, is launching a zoom lens for two third inch in C-Mounts. So zoom lenses is coming. We are testing them with the manufacturers, but that’s coming. So that’s why the mini 11, so the previous one is interesting because if you really need a zoom, most of our customers, they own already ENG lenses, so they just take it off the shelf, they put it on the camera, they want the zoom there’s a solution. Because we are really in the white angle area.
Stephane Dubocu (00:45:48):
But again, don’t get me wrong, it’s coming. So the mini 16, it’s the last camera we have with fixed frame rates because of the slow motion just after. It’s a one inch sensor. So it’s a huge sensor, global shutter. So again like the ATOM one, that’s the smallest global shutter 4k of the industry. There’s nothing smaller than that and again, just that it’s amazing. So very, very nice camera. So the C-Mount lenses are not motorized, so we have developed a kind of motorization. So you can get lens motors that are fully controlled from the RCP for the iris and the focus, because as I’ve told you, that’s fixed focal length, so you don’t have the zoom. But then for the iris and the focus you can control them remotely. And iris control is also something really, really important.
Stephane Dubocu (00:46:58):
That’s the Micro Pan Tilt Head that we have. So you remember my video here, so that’s the video. So that’s the control with the CyanView and the joystick. That’s the Micro Pan Tilt Head, it’s really smooth. You can really use presets position, record them from the CyanView. You can also control it from the CyanView itself because it’s a touch panel, it’s a touch screen. So you can play with your finger on the touch panel to move the Pan Tilt simply like that. Or of course, even the joystick, like I do is of course better.
Jim Jachetta (00:47:41):
I’ve seen the operators following, the hockey puck in hockey. I don’t know how they do it. They got the PTZ controller and they can’t blink-
Stephane Dubocu (00:47:55):
Yeah but you need a very fast camera.
Jim Jachetta (00:47:55):
… they don’t blink, the operators don’t blink.
Stephane Dubocu (00:47:59):
Yeah that’s crazy. You don’t see that… When I just attending that, I don’t see the… Or do you say the puck or the-?
Jim Jachetta (00:48:11):
The hockey puck. Yes, the puck.
Stephane Dubocu (00:48:13):
Okay. So it’s so fast. That’s where you need slow motion.
Jim Jachetta (00:48:19):
Stephane Dubocu (00:48:20):
Yeah, slow motion of course, for the production, but also for the referees. It's also important to see whether the puck is inside or crossed the line, so it's also interesting as a referee camera. I will come to the slow motion just after. So, the Micro Pan Tilt Head; an impressive Micro Pan Tilt Head. Look at the size of the XLR connector next to it: you see the size of it, it's really, really small. Don't forget that the camera you see in the pictures is this one, so it's really, really small. It's weather resistant, so you can use it outside, and combined with my waterproof camera, you can put this pan tilt outside in the rain and it will work. So there's a famous boat race at the moment, really, really famous, with really high-speed boats.
Stephane Dubocu (00:49:13):
I cannot name the race, but it's really, really famous. They are using these on the boats, so it's really on the sea, and it works really, really well. The Micro Pan Tilt Head uses the same gears as my lens motors, and it's 4K capable, so it supports all my cameras. What you see here is really the camera: this one is my Micro Pan Tilt with the ATOM one mini AIR that you see here, with the CyanView CI0 below it, with the tally on. And again, I have big fingers, but still, it's really small. It's incredibly small, and you have a full 1080p HDR, remotely controlled camera for not a lot of money, because the pan tilt is in the same kind of price range as the camera. So for 2,000 something, you have a broadcast camera with a pan tilt head [crosstalk 00:50:18] video.
Jim Jachetta (00:50:18):
So as you said, the CyanView box, what is that, a CI0? That's doing the serial-to-IP conversion?
Stephane Dubocu (00:50:27):
Absolutely. You see that in my left picture: I have this RJ45 coming into the camera interface, and then there are two Hirose connectors. One goes into the Micro Pan Tilt Head for the control of the pan tilt head, and the other output goes directly into my camera. So the CI0 can control two things at the same time; it can be two different cameras, or a camera and an accessory like this pan tilt head.
Jim Jachetta (00:50:58):
And you see how Stephane has it mounted: the CyanView has a 1/4-20 mount on it, so you can put it between the mount or the tripod and the camera.
Stephane Dubocu (00:51:09):
Everything put together.
Jim Jachetta (00:51:10):
Stack everything up nicely together. It's better than velcroing and duct-taping things together; it's a nice integration.
Stephane Dubocu (00:51:19):
You can do that if you really love Velcro, but you really have an engineered… David is a clever guy, so he did it really well. And the last picture is just a normal joystick, just to show the size. You see there's no Photoshop behind it; it's really the pan tilt next to the joystick.
Jim Jachetta (00:51:41):
The joystick’s bigger than the camera and the Pan Tilt Head?
Stephane Dubocu (00:51:45):
Yeah, this is really crazy. We did some waterproof housings for the other cameras. So we have the small camera that I've shown, the ATOM one mini Waterproof; for whatever reason, at the moment this housing works only with the 4K mini 7, but we are also working on housings that will fit all the cameras to protect them. It looks big, but look, again, in my hands it's really small. So it's really a small product. Slow motion: let me take some time to explain the slow motion to you, because it's a really cool system that is quite unique. Unique for two reasons. The first one is the price: it's really cheap. I'm always saying that I was fighting with my CEO because I thought it was too cheap, that, as we say, we were leaving money on the table. And my CEO said, "No, we just want to enable all our customers to be able to reach that technology."
Stephane Dubocu (00:52:52):
So I think at the end of the day he's right, because there's really nothing, Jim, on Earth at that price. We have a broadcast-quality camera; there is some competition, and I know it really well because of my past at I-MOVIX in slow motion, so I can tell you there's no comparison on pricing. Let's say we are around 10K U.S. dollars for that camera; an I-MOVIX was like 200K. There's really no comparison. Don't get me wrong, of course, if you go to a Sony slow motion, the 4800 for instance, that's 100K, let's put a price on it. I'm not saying we will replace them, but for the money, for that price difference, we are not ten times less good. It's really close-
Jim Jachetta (00:53:50):
Not every production has the rental or the purchase budget of a hundred K to be [inaudible 00:53:58].
Stephane Dubocu (00:53:57):
Exactly. Now they can have it. And I will compare it, maybe it's a very good comparison or not, I don't know, with the Phantom camera, because we have the same kind of workflow as the Vision Research Phantom camera; I will explain why. So the camera, again, trust me, is smooth and works really well. I'm just shooting a turntable here. I've got this, again, very easy… Sorry, I need to change my angle because I want to show you this. I've got this jog shuttle for control; it's just a basic ShuttlePRO. You see, it's really basic, costs almost nothing, linked to the CyanView, and the CyanView makes the bridge to the camera. So let me come back to the replay output, really easy. Or maybe I can do it like this, so I can show both at the same time.
Stephane Dubocu (00:54:54):
So you have the live and the replay, which is always available; those are two independent outputs. The live is always there, a continuous output, so the director can always just cut to the live. And then you have the replay, which is independent, with its own video processing. You could even have different colors on it if you want; nobody does that, don't get me wrong, but you really could. At the moment I am in what I call trigger mode: I'm recording internally, into the internal memory, and I'm just triggering the memory, and then I have my slow motion, 500 frames per second. Of course, and then [crosstalk 00:55:40]
Jim Jachetta (00:55:40):
Right. And the jerkiness we're seeing in these videos is due to the very low and jerky frame rate of GoToWebinar.
Stephane Dubocu (00:55:47):
Yeah. I’m sorry about that. But I can tell you, in the next video, I hope that you will be able to collect the video in a nice way, because it’s really impressive.
Jim Jachetta (00:55:56):
We have some Vimeo recordings that I will put up on the blog post.
Stephane Dubocu (00:56:01):
Yes please because it’s really…
Jim Jachetta (00:56:01):
Put up on the blog post…
Stephane Dubocu (00:56:01):
Yes, please. Because it’s really working fine, so I can tell you. So, same video processing, FPGA. I mean, not the same FPGA. It’s a really, big FPGA in that one because you need to under 500 frames per second. So, as you can imagine, the amount of data is huge, but the same video processing. That was my point. So, same Multi-Matrix, same everything on all my cameras. So, if I want to change now the… So, let me come back to this one if I want to change the red again. I can go to my paint in Multi-Matrix. Enable the gate effect and change the red of my car, if I want. So, really same video processing. So, you can really paint it like a normal broadcast camera if you want. So, that’s [crosstalk 00:56:59] the first…
Jim Jachetta (00:57:00):
Back to [Reading 00:57:01] control, right? With slow-motion cameras, you need a lot more light, correct? [crosstalk 00:57:09].
Stephane Dubocu (00:57:09):
That’s absolutely right.
Jim Jachetta (00:57:10):
So, then the [Color Balance 00:57:10] could be off.
Stephane Dubocu (00:57:10):
Of course, because you are taking 500 frames per second; the shutter is closing 500 times per second, so you take in a tiny amount of light.
Jim Jachetta (00:57:21):
Stephane Dubocu (00:57:22):
But we are using the latest generation of Sony high-speed sensor in that camera. Don't get me wrong, the ATOM one is more sensitive, of course, but it's only 50 or 60 frames. That one is one of the most sensitive slow-motion cameras I've seen on the market; it's really using the latest technology. In the U.S., I don't see any problem, because most of your stadium lighting is really well equipped.
Jim Jachetta (00:57:52):
Stephane Dubocu (00:57:53):
In Latin America, I mean, we sold two cameras in Argentina recently, and they were used in Brazil. That was working fine, even with bad lighting, because… I mean, of course, the level of quality is different. I have some pictures I will show you just after; it was working. But in America, in the U.S., it will be perfect. So really no issue here.
Jim Jachetta (00:58:17):
Stephane Dubocu (00:58:19):
But you are right, sensitivity is really important. So I will not explain every detail, but this workflow is interesting, because those are the two workflows we have. The camera is kind of two cameras, if you want: we have the trigger mode and the SSM mode. In trigger mode, you record into the internal memory of the camera continuously. You have 60 seconds of recording, which is unique, Jim; there's nothing on Earth that has that length. Just to give you an example, 60 seconds of recording at 500 frames per second means almost 10 minutes of slow motion. Nobody in a broadcast production will use 10 minutes of slow motion live, because it's too long; everyone would just tune out of the show.
Jim Jachetta (00:59:15):
The [crosstalk 00:59:16] So slow.
Stephane Dubocu (00:59:18):
You use that for the highlights: a smile, a football player who is really happy. It's really for these kinds of emotion shots, just a few seconds. So 60 seconds is really long, but we split the memory into banks. And so you can record into bank two while you work with bank one: you can play back your bank one, for instance, and if something happens, a fight, or a nice flag flying in the crowd when people come back into the stadium, then you can trigger a second time while you are replaying the first bank. Now the second bank has something ready to show, and with four banks, I'm also still recording on the third bank. So you have up to four banks, and honestly, that's more than enough. You don't need more than that; I don't see four completely different close actions that you would need, and the camera operator also needs to keep it all in sight.
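The trigger-mode workflow he describes, continuous loop recording with freezable banks, can be modeled in a few lines. This is a toy sketch under assumptions from the talk (500 fps, roughly 15-second banks, four banks), not Dream Chip's firmware:

```python
from collections import deque

class TriggerModeRecorder:
    """Toy model of trigger-mode recording: a rolling buffer of the last
    `seconds` of frames, frozen on demand into one of several memory banks."""

    def __init__(self, fps=500, seconds=15, banks=4):
        self.live = deque(maxlen=fps * seconds)  # rolling last-N-seconds buffer
        self.banks = [None] * banks              # frozen clips, ready to replay
        self.next_bank = 0

    def record(self, frame):
        self.live.append(frame)  # always recording; oldest frames fall off

    def trigger(self):
        """Freeze the rolling buffer into the next free bank."""
        if self.next_bank >= len(self.banks):
            raise RuntimeError("all banks in use; free one first")
        self.banks[self.next_bank] = list(self.live)
        self.next_bank += 1
        # the live buffer keeps rolling, untouched, for the next trigger

    def replay(self, bank):
        """Yield a frozen clip; 500 fps played out at 50 fps is 10x slow motion."""
        yield from self.banks[bank]
```

The key property is that triggering never interrupts recording: a second event can be captured while the first bank is being replayed to air.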
Jim Jachetta (01:00:31):
Maybe, Stephane, I think I understand the trigger and what it's used for. But maybe explain it like this: the goal is scored, the hockey puck is in the net. You [crosstalk 01:00:43] wait for the action to stop,
Stephane Dubocu (01:00:45):
So it has already been recorded, because it's loop recording into the memory. And then, let me switch to this one, you have the slow-motion remote. I'm just triggering the memory, and now I can replay the last 15 seconds of live and go back into the past, to see whether what was shot was a goal, or a foul, or something like that. So you can come back, and I can free the bank and come back to live. And I can do that four times; I have four banks to do that.
Stephane Dubocu (01:01:23):
So one example would be a goal, a hockey puck that goes into the net. You trigger to get the goal. Then you have the trainer from the other team who is really mad, having a really bad reaction; you can also record that. Then there's a fight between the hockey players at the same time; you can capture that. And then there's a beautiful lady waving in the air in the crowd, doing some dance or whatever; you can also trigger the memory and record her at the same time. So yeah, trigger mode is really pushing a button to stop the recording. It really freezes the last- [crosstalk 01:02:08]… 15 seconds.
Jim Jachetta (01:02:09):
Saved the recording.
Stephane Dubocu (01:02:10):
Yeah, exactly. It’s saved in the memory. If you would have a problem, I don’t know. You restart the camera. That clip is saved. So it’s really saved in the camera. It’s not like ROM. It’s really like an SSD drive in it. So, it’s really recorded on the SSD drive. And then you put them on SDI live. So, the workflow is that. So if I come back to this one, the workflow is live. So, you just replay back the live, from the memory. And then the director can directly cut it on the air because it’s a Live SDI. I mean, it’s a Replay SDI feed. So and Compressed SDI. So we don’t really make compressed clips like Apple ProRes clips inside. It’s really the live SDI that we have frozen.
Jim Jachetta (01:03:06):
There’s no compression, it’s a- [crosstalk 01:03:09].
Stephane Dubocu (01:03:08):
It’s no compression.
Jim Jachetta (01:03:09):
… in a live recording, you trigger at the end of the event you want to catch. So the goal just happened; I hit the trigger; now I capture the 15 seconds before the trigger. So I've got the player coming to the goal, I've got the puck being shot into the goal. So- [crosstalk 01:03:32].
Stephane Dubocu (01:03:31):
It’s inside. The action is done and triggered. There are ways. I’m not sure that we have already implemented that, but there’s also a pre-trigger time. So you can also trigger, and then it will just take the next 15 seconds. So, there’s two ways actually. But I’m not sure- [crosstalk 01:03:52]
Jim Jachetta (01:03:51):
Before or after the trigger? Pre or both?
Stephane Dubocu (01:03:53):
Jim Jachetta (01:03:53):
Stephane Dubocu (01:03:54):
At the moment it’s before, but we could do after. It’s just a question of programming the camera. I’m not sure it’s already activated, but if somebody really needs that…
Jim Jachetta (01:04:05):
Stephane Dubocu (01:04:05):
Like for horse racing, at the finish line: when the first horse is really coming up to the finish line, you trigger, and then it takes the next 15 seconds. That would be the application, for instance. But usually the action happens, you wait for the end of the action, you trigger, and you go back into the past. That's the usual workflow. So that's trigger mode, and then we have SSM mode. You know that on the market, Jim, you have SSM cameras, super-slow-motion cameras, where you are pushing out phases. Like 20 years ago, maybe? No, because time is flying. You had the first Sony 3x, Grass Valley 3x. We do the same.
Stephane Dubocu (01:04:48):
So we have two phases, three phases, four phases out of the camera, and you can reach 240 frames per second with continuous recording. You push that into an EVS, into a DreamCatcher, the Ross server, whatever broadcast server, and it's recorded in the server continuously, so everything is slow motion in this case. That's a very interesting workflow, because then you have everything in slow motion, and the operator is just using a server like usual. Pros and cons: you reach only four times 60 frames per second, so 240 max. If you want to reach higher frame rates, you need to go into trigger mode. But because it's the same camera, you could do the first half in trigger mode and the second half in SSM mode; you just restart the camera in the other mode and it's ready. So you can also be a bit creative with that. I hope we are not losing too many people at the moment.
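The phase-output idea is simply round-robin distribution: a 240 fps stream leaves the camera as four parallel 60 fps feeds, and the server re-interleaves them. A minimal sketch of that principle (an illustration, not the camera's or the server's actual implementation):

```python
def split_into_phases(frames, phases=4):
    """Distribute a high-speed frame sequence round-robin across `phases`
    normal-speed outputs: frame 0 on phase 0, frame 1 on phase 1, and so on."""
    outputs = [[] for _ in range(phases)]
    for i, frame in enumerate(frames):
        outputs[i % phases].append(frame)
    return outputs

def reinterleave(outputs):
    """Server side: merge the phase feeds back into one slow-motion stream."""
    merged = []
    for group in zip(*outputs):  # one frame from each phase, in order
        merged.extend(group)
    return merged
```

With four phases at 60 fps each, re-interleaving reconstructs the continuous 240 fps recording; that is why SSM mode tops out at phases × base rate.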
Stephane Dubocu (01:06:04):
Because, I’m getting to the end slowly. I would like really, after the slow-motion of course, to speak a little bit about the Barracuda and show you the used case, two nice case study that we had recently. So, let me make [crosstalk 01:06:19].
Jim Jachetta (01:06:18):
This is all good stuff, Stephane.
Stephane Dubocu (01:06:21):
Jim Jachetta (01:06:21):
We’re learning a lot.
Stephane Dubocu (01:06:23):
Excellent. So the SSM500 has two models, two versions: one is C-mount and one is B4 mount. Those are two different models; we cannot change the mount, unfortunately, because that demand from the market came too late in the development. So we have two different cameras, but I don't think it's a huge problem, first of all because of the price of the camera, but also because they are two different applications. C-mount would be more like an onboard cam, on a crane, or on a gimbal, really close to the action, like a POV slow motion with a wide angle; and B4 would be more like a static camera with a camera operator and a big broadcast B4 box lens in front of it.
Stephane Dubocu (01:07:16):
So, again. I’m not saying I will replace all the existing slow-motion, but I’m sure because of the budget. That will be super appealing for our customers because it’s really incredible when it comes to the price. So, it’s two different applications, I guess, for me. Controls, you’ve seen it. So this is a ShuttlePRO. We can be controlled by Cyanview. They have integrated the ShuttlePRO, but also the JL Cooper, those American brands. So it’s also possible. That’s the kind of application we have at the moment. So like on the pole cam, so the one on the left is from tournaments in Argentina. So they were using that on the football for Latin America. So a kind of super big, final stuff. So those guys in Argentina are already using this camera in Latin America. So on the right, that’s, I think. Oh, I will be killed by the football fan in Europe. I don’t recognize the stadium, but I think it’s… [crosstalk 01:08:23] Manchester United.
Stephane Dubocu (01:08:24):
Please don’t kill me if it’s not that. It’s in UK, that’s for sure. I think it’s Emirates, Manchester, but, again I’m not totally sure, but yeah.
Jim Jachetta (01:08:35):
You’re in big trouble.
Stephane Dubocu (01:08:37):
Yeah, I'm in big trouble. And one of the other applications you see above is the ATOM one on a kind of magic arm behind the goals, with a wide-angle lens from Kowa. That's the kind of soccer, because that's all football, of course. Another very interesting application is what we call the flying cam, the wire cam: those cameras that fly over the stadiums. They can use my camera because it's light, small, and with slow motion. That one was used recently in Portugal, in Porto, which is one of the big football, soccer, teams in Portugal. That one I know because it's written on the stadium, so I can read it.
Stephane Dubocu (01:09:24):
So that was a nice application. When it comes to applications, maybe some examples of what we are doing, Jim; and I have a nice video to share with you guys in a minute. So, of course, because these are POV mini cameras, reality TV shows are really important. Shows like Ninja Warrior: you need really tiny cameras in tiny places where you cannot put a big broadcast camera, and then you use my camera. The Super Bowl, that's what I was telling you: we were in the line-to-gain pylon cams for the Super Bowl. I'm not really into the rules of American football, of course, but I've been told that the line to gain is an important one because it moves with the teams.
Jim Jachetta (01:10:19):
Stephane Dubocu (01:10:19):
So it's movable, and it's also wireless. So of course, we're super proud to have been at the Super Bowl.
Stephane Dubocu (01:10:28):
The Daytona 500. I have a very, very nice video; let me show it to you. Sorry for the quality, but again, I can tell you it's fantastic. They used it on drones: Fox was really flying next to the racing cars. And again, I can share the video with you, Jim, because it's really nice. They were really close to the action, fully shaded and painted via CyanView, live, because they were able to match the other cameras.
Stephane Dubocu (01:11:01):
So imagine the potential you can get with drones in those applications. That's really cool; a very nice application that we had in the U.S. recently. So it's car racing; you see ice hockey, athletics, football, ski, anything you can imagine where you need onboard cameras, like the famous NASCAR circuits. Fox is using the 4K mini 16 for NASCAR applications. Maybe let me come back one second to that one. Did you see, Jim, how the camera interface is mounted? It's clever: there's a screw, but they just put tape around it. So it's not the way it should be, as you know. It should just be- [crosstalk 01:11:55]
Jim Jachetta (01:11:54):
And maybe they should’ve used 1/4-20 mount. Maybe that tripod there didn’t have a 1/4-20 mount- [crosstalk 01:12:00]
Stephane Dubocu (01:11:59):
I don't know, but yeah, that was funny, because that's not the way it should be, right? So, another sport that I don't understand is cricket; it's like UK baseball. Also very interesting for slow motion, because it's a very fast sport, so that's a very good application. Or in the grass: you see on the right, there are some cameras really in the ground, which is also interesting. A sport that is becoming really popular in Europe is field hockey, because we have no ice in Europe, so we play it on the field; it's really becoming popular. And e-sports, a huge, huge market, of course.
Jim Jachetta (01:12:53):
Yeah, big market here in the US yeah.
Stephane Dubocu (01:12:56):
Super important. News: because the cameras are so small, with a global shutter, combined with a Barracuda you can easily make flight kits, something easy to transport and quick to set up for whatever you need. I think on the left, that was the Spelling Bee, the famous Spelling Bee that you have in the U.S., where you need to spell the words, right? They were using that with the ATOM one.
Stephane Dubocu (01:13:34):
And then cinema, because of the quality of the camera: we have been on a famous production from a famous director in the U.S. that makes millions, with some blue characters in 3D. So you certainly know the kind of film I'm speaking about. 3D is really interesting with our cameras, because 3D is still popular in cinema and documentaries, but it's really difficult to put in place, with the depth of field and so on. If you have really tiny cameras, then it really helps, of course.
Jim Jachetta (01:14:16):
Yeah. You can get them close together to have the proper… [crosstalk 00:01:14:20]
Stephane Dubocu (01:14:21):
Exactly. So it's not just the application, it's because of the quality of the camera; but cinema is also an application. Concerts, of course, a very important application, not just the conductor cam I was telling you about, but having a small camera on an instrument, on the… And house of worship, super important, especially in COVID times: to spread the voice of God to the homes, it's always interesting to have that kind of application. So there's really no limit to the applications, but I think it was interesting to show a little bit of what we can do with our cameras. Shall we speak about the Barracuda, Jim?
Jim Jachetta (01:15:08):
Oh, sure. So, one question. Daniel here has some really great questions. He’s asking, “Has anyone used the cameras in an at-home or REMI production?” We’ve used the Cyanview technology with some of our bonded cellular at-home productions. But yeah- [crosstalk 01:15:34] we just started working with Dream Chip, so we haven’t really deployed many Dream Chip cameras yet through VidOvation.
Stephane Dubocu (01:15:40):
It’s really new, Jim. That’s normal. Yeah, we do, of course, and it’s one of the two case studies I’m showing just after. So we’ll show you that just after. But definitely yes, because again of the quality and the size of the camera, it’s easy to deploy by one guy. Because it’s small, you put the cameras, you put whatever streaming device, I would prefer the Barracuda, of course, but you can use whatever you want. [crosstalk 01:16:08] We output SDI, so you can use whatever system you want behind that.
Jim Jachetta (01:16:13):
Whatever the workflow might be. And then also the Cyanview helps to do the at-home production, because RS-232 and RS-422 don’t translate directly to the internet. So we go IP for control. That’s a very important piece.
Stephane Dubocu (01:16:30):
Yeah, exactly. And with the RIO that they have, it’s really made to go over the public internet in between, even with LTE connections, just for the control; we have a SIM card. So yeah, definitely, we have the tools for remote production, that’s sure. So, the Barracuda. Let me come back, sorry, switch again to this view. So, that’s the Barracuda. You see, again, it’s a small box, not big. Up to five SDI inputs on one box like that, and three different outputs. We have the LAN, of course, so you can just stream over Wi-Fi, a local network or the public internet, you do whatever you want. We have antennas because we have an internal 4G LTE modem, the most used in the industry. We don’t do bonding. That’s also one of the differences with the competition, which is also at a different price level.
Stephane Dubocu (01:17:34):
So I guess that my product is interesting, again, because of the price, but we have different features. We also have an ASI output, so we can also transmit DVB-ASI if we want, using that for, I don’t know, satellite communication or any ASI wireless RF transmitter. So, imagine four or five cameras, H.265 compressed, with one ASI output over RF, then the decoder on the other side. And you have five cameras over a single RF channel, which can be very interesting for this kind of application. So we have- [crosstalk 01:18:20]
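The arithmetic behind putting several compressed cameras on one ASI/RF channel is easy to sketch. DVB-ASI carries a transport stream with roughly 213 Mbit/s of usable payload; that figure and the safety margin below are generic assumptions, not Dream Chip specifications.

```python
# Illustrative check: do N compressed camera streams fit in one
# DVB-ASI channel? 213 Mbit/s is the commonly quoted usable TS
# payload of an ASI link; the 10% margin left for mux/PSI overhead
# is an assumption.
ASI_PAYLOAD_MBPS = 213.0

def fits_in_asi(stream_mbps, headroom=0.9):
    """True if the summed per-camera bitrates fit with margin."""
    return sum(stream_mbps) <= ASI_PAYLOAD_MBPS * headroom

# Five cameras at 10 Mbit/s H.265 each share one RF channel easily.
print(fits_in_asi([10, 10, 10, 10, 10]))  # → True
```

At typical H.265 contribution bitrates there is ample room for four or five HD cameras on a single carrier, which is the point of the workflow described above.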
Jim Jachetta (01:18:20):
You have five SDI inputs. So, do you have five independent encoders in there?
Stephane Dubocu (01:18:25):
Yes. And they are really independent. I will show you that just after. When it comes to five inputs, because we have an FPGA bandwidth limitation, of course, five cameras means 5x 1080p30. For four cameras, it’s 4x 1080p60. Or 1x 4K.
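The three supported configurations can be compared as raw pixel throughput. This is a rough sketch only, since the actual FPGA encode budget is not stated:

```python
def pixel_rate(width, height, fps, n_cameras=1):
    """Luma pixels per second the encoder bank must process."""
    return width * height * fps * n_cameras

print(pixel_rate(1920, 1080, 30, 5))  # 5x 1080p30 → 311040000
print(pixel_rate(1920, 1080, 60, 4))  # 4x 1080p60 → 497664000
print(pixel_rate(3840, 2160, 30))     # 1x UHD-30  → 248832000
```

Note that 4x 1080p60 is the heaviest of the three, so the published limits are per-mode ceilings rather than one single pixel-rate number.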
Jim Jachetta (01:18:49):
Then, 1x 4K is another option?
Stephane Dubocu (01:18:56):
Jim Jachetta (01:18:57):
That’s interesting. So, it’s just a throughput limitation. If you want to do five it’s 30 frames per second- [crosstalk 01:19:04]
Stephane Dubocu (01:19:04):
Otherwise it’s four. Let’s calculate on four.
Jim Jachetta (01:19:07):
Stephane Dubocu (01:19:08):
You see that on the connectors: I have the same Hirose connectors that I have on my camera. That means that if you battery-power this Barracuda, you can power my cameras, but also control them. So if you use the Cyanview on the other side, you can also go over IP. We do a kind of serial terminal over IP to control my camera remotely. So we can put RS-485 at 115,200 baud through it. It’s a really quick bidirectional- [crosstalk 01:19:42]
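The serial-terminal-over-IP tunnel is essentially raw RS-485 bytes carried inside a TCP session. The actual wire format isn’t documented in the talk, so the length-prefixed framing below, and the sample command bytes, are hypothetical, purely to illustrate the idea:

```python
import struct

# Hypothetical framing for carrying camera serial commands over a
# TCP tunnel: a 2-byte big-endian length prefix in front of each
# raw RS-485 payload. The real Barracuda/Cyanview wire format may
# differ.

def frame(payload: bytes) -> bytes:
    """Wrap one serial payload for transport."""
    return struct.pack(">H", len(payload)) + payload

def unframe(buf: bytes):
    """Split one framed message off the front of a receive buffer.
    Returns (payload, remaining) or (None, buf) if incomplete."""
    if len(buf) < 2:
        return None, buf
    n = struct.unpack(">H", buf[:2])[0]
    if len(buf) < 2 + n:
        return None, buf
    return buf[2:2 + n], buf[2 + n:]

msg = frame(b"\x01\x10GAIN 6")       # a made-up camera command
payload, rest = unframe(msg + b"extra")
print(payload, rest)
```

Framing matters because TCP is a byte stream: without a length prefix (or a delimiter), two camera commands sent back-to-back could be read as one.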
Jim Jachetta (01:19:42):
So, you don’t necessarily need the Cyanview. You can port the serial controls- [crosstalk 01:19:49].
Stephane Dubocu (01:19:49):
I mean, the Cyanview RCP is for the shading, so that’s an option. But when it comes to flexibility and controlling different units, because my tunnel is really limited to my camera: if you have a pan-and-tilt head, then it’s better to have the RIO from Cyanview on the other side, because then you can also control different units.
Stephane Dubocu (01:20:12):
But if you have like an onboard camera, four onboard cameras or five 1080p30 cameras, then you don’t really need other accessories. Then you can put everything in the Barracuda and just stream them over. So, interesting information too, Jim. We don’t do bonding, but, not just because of that, imagine that you have a problem with your network, or even over the public internet for whatever reason: we have an internal SSD drive in the unit. So everything can also be recorded on the SSD drive. That means that when you retrieve the connection, you can download the files through FTP and you don’t lose anything. That’s the idea behind it. And we also have… I don’t know if you see it, but we have a second connector. This is a prototype, so I have only one fitted, but there’s a second one in the picture.
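Recovering recordings from the internal SSD over FTP after a dropout could look like the sketch below. The host, credentials and file names are placeholders, and `missing_files` is a helper name of our own, not a Dream Chip API:

```python
from ftplib import FTP

def missing_files(remote_names, local_names):
    """Files present on the unit's SSD but not yet downloaded."""
    return sorted(set(remote_names) - set(local_names))

def recover_recordings(host, user, password, have_locally):
    """Pull any recordings we don't have yet. Placeholder
    host/credentials: adapt to your unit's settings."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        for name in missing_files(ftp.nlst(), have_locally):
            with open(name, "wb") as f:
                ftp.retrbinary("RETR " + name, f.write)

print(missing_files(["a.ts", "b.ts", "c.ts"], ["a.ts"]))
# → ['b.ts', 'c.ts']
```

Because the unit keeps recording locally during an outage, the download step only has to fill the gap, which is exactly the failure mode this feature covers.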
Stephane Dubocu (01:21:11):
That’s for GPS. So we can also transmit the GPS position of the unit. The application would be, you are following a marathon or a bicycle race, and you can transmit the position of the motorbike that is following the runners. Then the mobile unit can make graphics with a city map, putting the position of the runners on it, things like that. So we can give the GPS position. And we have a CAN bus interface as well. A very famous motorbike competition in the world puts the Barracuda on the bikes, and we can also retrieve information like RPM and speed, make graphics on the other side and transmit that over IP sockets. On the other side, they retrieve the data and can use it to do whatever they want.
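The GPS data format the unit emits isn’t specified in the talk. Assuming standard NMEA 0183 sentences, extracting a decimal-degree position from a GGA sentence would look like this (minimal sketch, no checksum validation):

```python
def parse_gga(sentence: str):
    """Extract (lat, lon) in decimal degrees from an NMEA GGA
    sentence. NMEA encodes position as ddmm.mmmm / dddmm.mmmm."""
    f = sentence.split(",")
    if not f[0].endswith("GGA") or not f[2]:
        return None

    def dm_to_deg(dm, hemi, deg_digits):
        deg = float(dm[:deg_digits])
        minutes = float(dm[deg_digits:])
        val = deg + minutes / 60.0
        return -val if hemi in ("S", "W") else val

    lat = dm_to_deg(f[2], f[3], 2)   # latitude: 2 degree digits
    lon = dm_to_deg(f[4], f[5], 3)   # longitude: 3 degree digits
    return lat, lon

s = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
print(parse_gga(s))
```

Decimal degrees are what a graphics system needs to plot the motorbike on a city map, which is the use case described above.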
Stephane Dubocu (01:22:05):
I have some workflows, so I will not read my slides; this is what I’ve explained. So I have some workflows that we did recently. Because of COVID, our office in Hamburg, so the other office, besides Hanover, has a church next to it, and they wanted to stream a kind of organ church concert live. So five 1080p25, that’s Europe. External audio connector, so you can also take a microphone or whatever, get an audio signal that you put in the box and embed over the SDI. Then you can retrieve it on the other side, on the decoder, over the SDI or on the audio out, and then you put that in your audio mix, whatever.
Jim Jachetta (01:22:54):
So it will take embedded audio from the camera SDI inputs, but you also have analog audio inputs- [crosstalk 01:23:00].
Stephane Dubocu (01:23:00):
It’s like that.
Jim Jachetta (01:23:00):
… line level analog audio coming in?
Stephane Dubocu (01:23:03):
Jim Jachetta (01:23:04):
Stephane Dubocu (01:23:05):
That’s exactly the case. And then on the other side, you just output it using vMix or whatever, and you can stream online. At the moment we have no RTMP. Or let me maybe show you the next slide, because it explains that. That’s vMix: in this case, with vMix you just need an encoder, because you get the SRT stream in your computer and you don’t need a decoder from us.
Stephane Dubocu (01:23:32):
So, the Barracuda is an encoder and a decoder. It’s the same product, Jim. So what does that mean? Your customer can do a job one day with a point-to-point connection, five cameras in, five cameras out, and the next day, just by updating the firmware, they can turn the decoder into an encoder. So they have two encoders, and they can have 10 inputs in this case. It’s all flexible. There’s no license, there’s nothing. Same product.
Stephane Dubocu (01:24:03):
Jim Jachetta (01:24:06):
Can you mix and match? Can I make three encode, two decode in the same box? So-
Stephane Dubocu (01:24:13):
That’s the million-dollar question that we get every time. Not at the moment.
Jim Jachetta (01:24:18):
Stephane Dubocu (01:24:19):
That’s definitely on the roadmap and especially with remote production, that’s important to get the video return feed to the place. So streaming the cameras but getting the program back…
Jim Jachetta (01:24:30):
Well, the solution to that problem is buy more boxes and they’re not very expensive.
Stephane Dubocu (01:24:36):
You’re a good salesman. [crosstalk 01:24:38]. I mean, just compare the pricing. The competition, I mean, they are really good, don’t get me wrong, but we are really, really affordable for the features we have. So maybe it’s a good calculation. But we have that on the roadmap. It just demands a full FPGA reprogramming from scratch, because we need to route the SDI differently. So it’s really a big [inaudible 01:25:11], and we have a very interesting roadmap, so we have some priorities. We will get there, because it’s really important, but not at the moment. It’s definitely on the roadmap, because it’s more than nice-to-have; it’s really important to have. So this is what we have at the moment. Sorry, I see that you don’t see all the… Maybe if I do this, yeah, you’ll see.
Jim Jachetta (01:25:37):
Oh, yeah. Yeah, that’s better. [inaudible 01:25:38].
Stephane Dubocu (01:25:39):
[crosstalk 01:25:39] better. So that’s what we have at the moment, really a picture of the features we have. Five cameras, or four cameras and one 4K. We can transport one stereo audio channel per SDI; the goal is to have eight stereo channels per SDI. We have the bandwidth to do that, but it’s on the roadmap. If you put five cameras, we already have 10 channels on the SDI. We have an external audio in, and it goes out on the other side. But actually, Jim, as I’ve told you, it’s the same product, so we could also have a kind of in-out: having intercom in between, communication back and forth over the audio. So a local-at-distance intercom, if you want. That’s also possible.
Jim Jachetta (01:26:41):
So you do 4:2:2 and 4:2:0. You do eight and 10 bit?
Stephane Dubocu (01:26:44):
Yeah, really important. When it comes to my products, we are 4:2:2 10-bit, so it’s really high quality, HDR; we are also transporting HDR. But when it comes to compatibility with other SRT decoders or cloud data centers, and there’s more and more production in the cloud, most of them support 4:2:0 8-bit only. So we support that too, but we really push our customers to use 4:2:2 10-bit if they use Dream Chip products, of course. We want to be compatible with the widest range of manufacturers in the industry; that’s why we support 4:2:0 8-bit. And 4:2:0 8-bit will also be important when we get our RTMP stream to go to social media. We don’t have it now.
Stephane Dubocu (01:27:42):
So at the moment we support H.265, because we believe it’s really the latest compression codec. We have H.264 also in the box, but not in the GUI. It already works at the command-line level if you want H.264. But if you go into the web interface, because everything is controlled there… maybe it’s interesting that I switch now to this one. I can just reach a webpage and have all my menus accessible for the transport. So I can define the transport, which can be RTP, SRT or ASI. I can make multiple streams in the same transport pipeline, or make different pipelines, which are independent.
Stephane Dubocu (01:28:31):
Check this, Jim. For instance, I have four cameras, the four cameras that you see, of course live, with the same PID number. I do that because I’m streaming four independent streams, so the PID numbers can be the same; it’s all independent. I like it this way because they are all independent. Pros and cons, as usual: the downside is that you have more overhead, because you have multiple pipelines. So it makes a bigger bandwidth at the end, not that much bigger, but bigger than having only one pipeline with multiple streams. It’s a question of what bandwidth you have, and that’s it. So the way it works, Jim: you need to define the transport. The transport is really, okay, what pipeline am I using? Am I using an SRT stream, an RTP stream or an ASI stream?
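The extra overhead of four independent single-program transport streams versus one multiplexed pipeline can be roughly estimated: every 188-byte MPEG-TS packet spends 4 bytes on its header, and each independent pipeline repeats its own PAT/PMT tables. The repetition rate and table sizes below are illustrative assumptions:

```python
TS_PACKET = 188   # bytes per MPEG-TS packet
TS_HEADER = 4     # bytes of header in each packet

def ts_header_overhead():
    """Fixed MPEG-TS packetisation overhead (~2.1%)."""
    return TS_HEADER / TS_PACKET

def psi_overhead_bps(pipelines, tables_per_sec=10):
    """Rough extra bits/s for repeating PAT+PMT in each independent
    pipeline (one 188-byte packet each per repetition). Illustrative
    figures; real muxes vary."""
    return pipelines * 2 * TS_PACKET * 8 * tables_per_sec

print(round(ts_header_overhead(), 4))  # → 0.0213
print(psi_overhead_bps(4))             # → 120320
```

A few hundred kbit/s of table overhead across four pipelines is negligible next to multi-Mbit video, which matches the remark that the bandwidth cost is “not that big”.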
Stephane Dubocu (01:29:33):
So you need to define it. If I click on it, you see for instance my SRT1. For SRT, of course, you have caller, listener and rendezvous modes. So you define that, you put the destination address; if you have a stream ID for SRT, you can put it there; and if you want to encrypt the stream, you can encrypt it here. Then you have some statistics. If I refresh my page, you see my statistics moving, and you see exactly where we are. I have 10 Mbit/s of bandwidth used, but I have 60 Mbit/s; the first line is what is available for me. So I could really increase my quality if I want. So I’m defining the transport; the first thing is defining the pipeline.
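The SRT options the GUI exposes (caller/listener/rendezvous mode, stream ID, passphrase) correspond to the standard SRT URL query parameters that many tools accept. A small helper to assemble such a URL (the helper naming is our own, not part of any Barracuda API):

```python
from urllib.parse import urlencode

def srt_url(host, port, mode="caller", streamid=None, passphrase=None):
    """Build an srt:// URL with the common options: connection
    mode, stream ID, encryption passphrase."""
    assert mode in ("caller", "listener", "rendezvous")
    params = {"mode": mode}
    if streamid:
        params["streamid"] = streamid
    if passphrase:
        params["passphrase"] = passphrase
    return f"srt://{host}:{port}?{urlencode(params)}"

print(srt_url("decoder.example.com", 9000, streamid="cam1"))
# → srt://decoder.example.com:9000?mode=caller&streamid=cam1
```

A caller on the encoder pairs with a listener on the decoder (or vice versa); rendezvous lets both sides initiate, which helps when each end sits behind its own firewall.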
Jim Jachetta (01:30:26):
Stephane Dubocu (01:30:27):
Then I go to the program, and the program is really defining… Let me click, for instance, on SDI1 program. Okay, let me show you this first: you always have contextual help when you hover the mouse over one of the indicators, to see exactly what you can and cannot do. And likewise for the presets. We have different presets, and we explain what kind of latency you can reach with those presets, and what kind of quality. You know this magic triangle between latency, bandwidth and quality; you need to decide where to sit in that triangle. So we decided to make four presets that you choose from. The quickest one is low latency, then we have reduced latency, normal latency and high quality. Of course, high quality buffers more, but you get more quality. It really depends on what you need to achieve.
Jim Jachetta (01:31:39):
What the application is or the quality of the connection.
Stephane Dubocu (01:31:42):
Exactly. And in Transport, you choose the transport that you defined before; that’s where you make the link between the transport and the program. Once this step is done, you can do it four times, like I do here: four 1080p50 streams, as you can see. What is the frame rate? Maybe you can see it here, or I can show it on the SDI. Let me show you the SDI menu. If I click on SDI, I can see what is connected on my Barracuda. At the moment I have two 1080i50 cameras and two 1080p50 cameras, so I can play with different frame rates, interlaced or progressive. Just a quick remark: we have found that it’s better to use progressive with this kind of H.265 compression. With interlaced, you are really encoding two half-frames independently, and to get the same level of quality as progressive, you need to increase the bit rate much, much more. So it’s really better. It looks like progressive is bigger to transport, but in the end it’s not. Really, progressive-
Jim Jachetta (01:33:04):
[crosstalk 01:33:04] is more efficient.
Stephane Dubocu (01:33:06):
And it’s much more efficient for the quality, so I really advise going progressive. That said, we do interlaced. We don’t convert in the unit, so if you want to get i into p or p into i, you need to do it in the camera itself or after the decoder [inaudible 01:33:24]. And you see also the audio that I have. I don’t know if you can see it, but… yeah, you can see it. If I click on SDI, I can also select… that’s where I select between 4:2:2 10-bit, 4:2:0 8-bit and the intermediates I can play with. Because I defined four independent streams, they really are four independent streams. That means they can differ: different formats, different chroma subsampling. Total independence.
Stephane Dubocu (01:33:57):
If I click on Network, that’s where I easily define my IP address, using a static IP address or DHCP. That’s also where I define my SIM card, with the APN and the PIN code if necessary. The way it works: if I put in a SIM card, it will always prioritise the LTE modem, and then it falls back to Ethernet if there is no connection. So if you want to use LTE, you just put in the SIM card, define the PIN, and it’s done. It’s really, really easy.
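The priority just described, always prefer the LTE modem when a SIM is fitted and connected, otherwise fall back to Ethernet, is simple enough to sketch:

```python
def pick_uplink(sim_present, lte_connected, ethernet_up):
    """Mirror of the described priority: LTE first when a SIM is
    fitted and the modem has a connection, otherwise Ethernet."""
    if sim_present and lte_connected:
        return "lte"
    if ethernet_up:
        return "ethernet"
    return None

print(pick_uplink(True, True, True))    # → lte
print(pick_uplink(True, False, True))   # → ethernet
```

The useful property is that the fallback is automatic: if the cellular side drops, streaming continues over the wired network without reconfiguration.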
Jim Jachetta (01:34:34):
How do you put the SIM in? Is there a slot externally to put the SIM in the unit?
Stephane Dubocu (01:34:38):
Absolutely. So when I come here… Let me show you that. So on the back of the units… Let me maybe show you that here. I have two slots here.
Jim Jachetta (01:34:50):
Stephane Dubocu (01:34:51):
One is for a SIM card and the other one is for an SD card. So I can record to an SSD drive, an SD card or USB; I can also put some drives on the USB. It’s Linux-based inside, of course, so you could also put a Wi-Fi dongle directly over there. So…
Jim Jachetta (01:35:16):
Stephane Dubocu (01:35:19):
So Network, Updates, of course, to update the unit. And System is just to define the camera control; that’s where you do this kind of tunnel in between. That will be super practical when we get the VPN. VPN is not implemented yet in the Barracuda, but having the same network under a VPN that you can define will also simplify a lot of features on the Barracuda itself. So…
Jim Jachetta (01:35:49):
Stephane Dubocu (01:35:50):
And of course, it’s exactly the same on the decoder. So if I switch to my decoder, same story. You define transports; so now I’m listening. Remember, Jim, I was calling on the other side.
Jim Jachetta (01:36:01):
Stephane Dubocu (01:36:01):
Here I’m listening on the port that I was mentioning on the other side. And then I define, okay: in the program, what transport am I using? What video port out am I using? What do I do with the audio: am I using the SDI embedded, or am I using the analog input/output? What is my preset? So again, a basic GUI, but in the end you can do everything. I like it because it’s finally super practical and really easy to use. So…
Jim Jachetta (01:36:37):
Simple to use. Yeah. Yeah, that’s important.
Stephane Dubocu (01:36:39):
Yeah, I think so. So, coming back to the presentation, I’m really coming to the end. I will show you the use cases just after. RCP control, I’ve explained that to you. ASI: having a wireless RF transmitter in between can be one option. And then the case studies. So we come to the last part.
Jim Jachetta (01:37:08):
So, ASI. Now the five HEVC streams are wrapped in a single ASI wrapper-
Stephane Dubocu (01:37:20):
Jim Jachetta (01:37:20):
Is that correct?
Stephane Dubocu (01:37:21):
Jim Jachetta (01:37:22):
And then that goes over the radio, and then you take that wrapper apart and get back the streams as if they [inaudible 01:37:30].
Stephane Dubocu (01:37:29):
Well, that’s why it’s a super cool application, because you can have multiple SDI over one RF signal. That’s exactly it.
Jim Jachetta (01:37:36):
Very good, very good.
Stephane Dubocu (01:37:42):
Two case studies, one with Amazon Prime, about rugby. That’s maybe not something popular in the US, but the point is not to speak about the sport, but the application of the cameras. So I will explain that, but let’s start maybe with the remote production. So actually, we were filming in a stadium in Wimbledon, in the UK: one guy with Barracudas, just placing mini cameras. They were ingested into the Barracuda and then streamed over. That was in the UK, and it was streamed to Sweden, where there was a guy at home racking the cameras and making the production on a kind of cloud. So no decoder in that workflow, just the encoder streaming SRT to a data center. And the guy had full control of video layouts, titles and graphics, and made a production that was just put online on social media.
Stephane Dubocu (01:38:53):
So very easy and simple, because of the size of the cameras. It was a mix of 4K and HD production, and really, really easy combining all the cameras and the Barracuda. A really cool application. I mean, COVID is of course a big, big issue, but it has also much accelerated the transition to remote production: lowering the cost, putting fewer people in the field, decreasing travel costs. So let’s take the good side of what happened; it was just an accelerator for that. And I believe, again, that with Dream Chip we are on that route. So…
Jim Jachetta (01:39:44):
Right, right. Well I think we’re all evaluating how we work, how we live with this new normal. So we’re here to help facilitate what the new normal is going to look like, right?
Stephane Dubocu (01:40:01):
Yeah. So we’ll see… It’s just really long here, because in Belgium we have these really strict lockdowns. I’ve been seeing just my computer and my bed for a year. My last travel was to the US, exactly a year ago. So I really miss seeing my customers, visiting you, Jim, and visiting some customers. Remote production is one thing, but taking back our freedom and visiting people is something I really miss. So…
Jim Jachetta (01:40:35):
Stephane Dubocu (01:40:36):
The second workflow that I wanted to speak about, Jim, because it’s really great; I really like it personally. So you recognise the rugby stadium. It’s closer to football than soccer, of course. The idea here was: on the poles of the goal, you see these two white boxes. Inside the boxes, they placed eight ATOM one mini waterproof cameras with two Barracudas. Every waterproof camera was used with a pan-tilt head and went over Wi-Fi, compressed with my H.265 encoders, going over a kind of Wi-Fi mesh network in between, then a fiber solution to the mobile unit, and everything fully controlled from the Cyanview.
Stephane Dubocu (01:41:32):
And this kind of picture is only feasible with mini cameras, because you cannot put a big camera there, and you can follow the action with the pan-tilt; you are really in the heart of what’s happening. It’s fantastic. These are definitely pictures that you cannot get with a normal camera. And again, colour correction, multi-matrix, you have everything here. So it’s really the combination of cameras, streaming, pan-tilt and control; I think we have everything here. That was really, really cool, a super project. And also because it was for Amazon Prime in Europe, which is of course a big name, and everybody was just happy with it. It was really working great. And you see 3D… I mean, again, you don’t have the quality of the pictures here, but believe me-
Jim Jachetta (01:42:42):
Stephane Dubocu (01:42:43):
Jim Jachetta (01:42:43):
Yeah. It’s stuttering a little bit, but that’s GoToWebinar doing that.
Stephane Dubocu (01:42:49):
It’s really, really good. So again, a mix of everything, and being really close to the action: waterproof, and a pan-tilt which is weather-resistant. The UK is really known for the rain, so there’s no problem… It’s not California, that’s my point.
Jim Jachetta (01:43:11):
We’ll get a year’s worth of rain in one hour. So when it does rain here, it’s pretty bad.
Stephane Dubocu (01:43:16):
Yeah, that’s crazy. Just [inaudible 01:43:17]. Too much water at once, so I don’t know which is the best option in the end. So, yeah, that was really my last slide. Maybe I’ll just show you, then, for the social media… I know that Jim will of course give the information to your customers, but we have a Vimeo page which is very interesting, really, honestly, because it’s really technical: how you connect a Cyanview RCP, how you connect my lens motors. We made some lens comparisons for quality, with the names of the lenses, for the different cameras. The slow-motion… David also made some webinars with us for Cyanview, including EVS, how it’s connected to the EVS. There are really a lot of interesting videos. Vimeo is more popular than YouTube in Europe; that’s why we use Vimeo. But you can reach that. And then we have [crosstalk 01:44:21].
Jim Jachetta (01:44:22):
What I plan to do is… A great place to go is the VidOvation website: go to vidovation.com, then Resources, then Webinars. You can sign up for upcoming webinars, but I put the recordings of all the webinars there. So there’ll be a recording, and then, with Stephane’s permission, I’ll put some of these Vimeo videos in there. I’ll make a resource page for everything we talked about today.
Stephane Dubocu (01:44:55):
Exactly. Let me use the opportunity here, Jim, because we discussed this before. For the customers or viewers at the moment: don’t go to the dreamchip.de website, because that’s the company website. We made a dedicated page for the broadcast products, which is what you see here on my left: www.atom-one.de. Go there, because you’ll also have the latest information. I didn’t mention it, but we also have a kind of open-source spirit at Dream Chip. Every protocol, every [inaudible 01:45:37], every piece of technical information is available on GitLab; GitLab is the kind of open-source software-engineering platform. We are really in that spirit. So if you Google “GitLab Dream Chip”, you’ll get to our ATOM page and all the information: if you want to integrate our cameras, use our protocols, make a special cable… everything is there. So, we are [crosstalk 01:46:08].
Jim Jachetta (01:46:08):
Spell that. What is it? GitLab? How do you spell that Stephane?
Stephane Dubocu (01:46:12):
So GitLab. It’s G-I-T-L-A-B. So like GitHub, but that’s the other GitHub. So that’s GitLab Dream Chip. So…
Jim Jachetta (01:46:23):
Okay. Okay. Well, I’ll put that in the blog post: all that information, all the links, all the videos that you couldn’t see very well today because they were choppy. I will [crosstalk 01:46:35]. I know our counterparts in Europe don’t sleep, so they are available to us. VidOvation would be your front line of support and pricing here in the US, and then we will bring in Stephane and his team as needed. It’s nice to hear that you have an open-source mentality. Also [crosstalk 01:47:05], I heard a lot about how you’re adaptable to the customer’s workflow. Every customer’s workflow is different, even with the triggering of the instant-replay cameras, and we can adapt to your workflow. And I’m sure the names you see here look very familiar to all of us, so they’re not new to the US market, but we’re hoping to broaden the use of Dream Chip [crosstalk 01:47:34] in the US with the help of VidOvation.
Stephane Dubocu (01:47:36):
Yeah. So, that would be fantastic Jim.
Jim Jachetta (01:47:42):
Thank you so much Stephane. Thank you everyone for staying tuned. Give us about a week. We’ll put all the information up online in a week. If you do have questions right now, reach out to [email protected] or call 949-777-5435. Thank you so much, Stephane. Have a great day.
Stephane Dubocu (01:48:06):
It was a pleasure.
Jim Jachetta (01:48:08):
Thank you, my friend.
Stephane Dubocu (01:48:09):
Thank you, Jim.
Jim Jachetta (01:48:11):
Bye bye. Thank you, everybody. Stay safe.
Stephane Dubocu (01:48:13):
You too. Bye-bye.
Jim Jachetta (01:48:15):