June 09, 2017

In the “Camera Shootout”


  • MPC, Technicolor’s Academy Award-winning visual effects studio, has run a “Camera Shootout” to test and learn about a new generation of virtual reality (VR), augmented reality (AR) and 360-degree video camera rigs.
  • The “Camera Shootout” reveals a number of exciting new features and key points of differentiation among the various camera rigs now available to filmmakers.
  • Technology standouts and innovations include more intelligent and automated stitching, cutting-edge sensors and light field technology, as well as the ability to more closely mimic people’s perception of scale.

Filmmakers have a growing variety of virtual reality (VR), augmented reality (AR) and 360-degree video camera rigs to choose from. But these rigs vary in their capabilities, features and innovation. MPC, an Academy Award-winning visual effects studio and Technicolor business, is running the Camera Shootout to try out a number of camera rigs, compare them, and discover new capabilities that will help filmmakers deliver exciting and powerful immersive experiences.

Tim Dillon, Head of VR & Immersive Content at MPC, sat down with us to talk about his “Camera Shootout” initiative, its discoveries and surprises...and plans for ongoing analysis.

 

You have been researching the different camera rigs, configurations and new technologies that are emerging to capture video for virtual reality productions. Can you tell me about what you went through, and what got you interested in this rather intense project?

Dillon: The “Camera Shootout,” as we call it, is actually an exercise in taking as many 360 camera rigs and as much camera equipment specifically made for shooting virtual reality projects as we can, and comparing them.

It has given us a chance to see how each of them operates and the differences between them, from the perspective of what we call “capture.” Additionally, we’ve been able to assess abilities in post-production and any processes around the camera.

What we learned is that the camera rigs vary a lot. The functionality runs the gamut from small consumer cameras, like the Samsung 360 camera people might be familiar with… a little white ball of a camera… to something larger like the GoPro Odyssey rig, which is made in conjunction with Google.

 

Tell us a little bit about the evolution that you’ve seen over the last 12 to 18 months.

Dillon: We’ve seen production companies -- VR studios like ourselves and others working in the space -- needing to look at all kinds of different solutions to capture 360 video. GoPro was one of the first small cameras to be really popular because of its size, and because it allows you to incorporate the basic idea of mimicking people’s scale of vision.

So, the simple way of explaining it is that you need a camera with two lenses that are separated by about the same distance as your eyes. GoPro is an immediate contender in that.

And then it rolls up to a lot of the larger cameras…the Reds, the Arri Alexas, the big professional cameras that filmmakers are working with all the time.

However, as much as everyone would love to work with those -- because there’s a lot of existing technology and processes around them, and the quality alone is fantastic -- they are often too large to actually get a human scale, a stereo scale, specifically for VR and 360.

It’s not a 3D movie question, which is contained in a 2D frame. This is a question about how it feels when you put that headset on. Does it feel like we’re at the right scale? Do we feel like a human sitting at a table? And that comes down to camera width.  GoPro was one of the first small cameras that people started building rigs for and 3D printing cases and putting things together in really interesting ways.
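To make that camera-width point concrete, here is a minimal sketch of the constraint in Python. The body widths are rough, assumed figures for illustration -- they are not measurements from MPC’s tests -- and the check is deliberately simplified: two identical camera bodies mounted side by side can’t put their lens centers closer together than one body width.

```python
# Illustrative sketch only: assumed, approximate body widths, not Shootout data.
HUMAN_IPD_MM = 64.0  # typical adult interpupillary distance (~60-68 mm)

ASSUMED_BODY_WIDTH_MM = {
    "small action camera (GoPro-class)": 41.0,
    "large cinema camera body": 150.0,
}

def fits_human_scale_stereo(body_width_mm, ipd_mm=HUMAN_IPD_MM):
    """Two identical bodies side by side can't place their lens centers closer
    than one body width apart, so human-scale stereo needs body_width <= IPD."""
    return body_width_mm <= ipd_mm

for name, width_mm in ASSUMED_BODY_WIDTH_MM.items():
    verdict = "can" if fits_human_scale_stereo(width_mm) else "cannot"
    print(f"A {name} (~{width_mm:.0f} mm wide) {verdict} pair up at human-scale stereo.")
```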

Over the past year, which feels like 10 years in the evolution of these cameras, we’ve seen Nokia bring out their Ozo rig, and commit to a professional camera that isn’t built from individual cameras. It’s one unit...and it has a preview system.

The preview system is really important to directors. Without it...they may be in a situation where they go through a shoot...and then have to wait an hour as the shot goes through the stitching process...only to conclude that they have to re-shoot.

That’s just mind-boggling for filmmakers who have spent decades working with existing systems that have a preview system.

We’re at the point now, though, with 360 cameras where preview systems are starting to come in through companies like Nokia, Google and some of the existing camera builders that people in filmmaking are used to working with.

 

How many cameras did you end up looking at? And where did you see things exceeding expectations, or that still left you wanting?

Dillon: It’s an ongoing project for us because these cameras are evolving all the time. But what we set out to do was take as many cameras as we could.

We took 15 rigs and created a registration setup: a room in which we controlled the environment, with objects in one quadrant, actors in another quadrant repeating a simple scene, green screens, blue screens, and, importantly, a number of different light and distance conditions. We had light variations in the room, so we would put a rig in the middle of our registration setup, bring the light levels down at different times, and roll on those cameras, whether we had a preview system or not.

We made it a scientific, fair environment.

And then we also looked at distances. We had a long room where we took the camera to five feet, then 10 feet, then 15 feet, and we put out markers at those points in front of the camera so we could find them in the footage we shot. We asked ourselves: are we seeing issues with clarity?
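As a rough illustration of what “issues with clarity” means at those marker distances, here is a back-of-the-envelope sketch in Python. The numbers are assumptions for the sake of the example -- a hypothetical 3840-pixel-wide equirectangular output and a one-foot-wide marker -- not figures from the Shootout.

```python
import math

# Assumed, illustrative values -- not figures from the Camera Shootout.
OUTPUT_WIDTH_PX = 3840                         # hypothetical equirectangular frame width
PIXELS_PER_DEGREE = OUTPUT_WIDTH_PX / 360.0    # ~10.7 px per degree of horizontal view
MARKER_WIDTH_FT = 1.0                          # hypothetical marker on the test range

for distance_ft in (5, 10, 15):
    # Angle the marker subtends at this distance, and how many pixels that covers.
    subtended_deg = math.degrees(2 * math.atan((MARKER_WIDTH_FT / 2) / distance_ft))
    marker_px = subtended_deg * PIXELS_PER_DEGREE
    print(f"{distance_ft:2d} ft: ~{subtended_deg:.1f} degrees, ~{marker_px:.0f} px across")
```

The point of a sketch like this is simply that detail shrinks quickly with distance, which is why the same markers were checked at several ranges.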

And then we looked at the stitching capabilities, which is a huge issue for 360 cameras. Some cameras, like the Nokia Ozo, have a built-in stitching system. It comes with a software suite that allows you to take your footage out of the unit, upload it into the software, and get some immediate stitching.

 

What were some of the common challenges or problems that you found when it comes to stitching?

Dillon: Stitching is really interesting, actually. It changes from camera to camera. What we’re looking for is how to get a clean, automated stitch -- what people call the “rough stitch,” as opposed to the “fine stitch,” which comes later with more finessing.

In the initial rough stitch, you’re looking for problems related to objects or people being too close to camera.  And then you want to see what happens as objects and people move through a quadrant where one lens ends and another lens begins.

Other things to check for revolve around how cameras deal with light or artifacts in the frame. It may be that an arm coming through the two lenses seems consistent and seems great, but then you notice some kind of mirrored or wobbly artifact moving over the arm. It’s not a break in the stitch, but it is a problem, because these things are a distraction that has to be dealt with in post-production. This can be serious because, in 360 video, your eye picks up on these problems fairly quickly, hence the need for as much perfection as possible.
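For readers who want a feel for what an automated rough stitch involves, here is a minimal sketch using OpenCV’s general-purpose panorama stitcher. This is an illustration of the rough-stitch step only, under the assumption of a few overlapping stills with placeholder file names; the rigs discussed here ship their own calibrated, purpose-built pipelines.

```python
import cv2

# Minimal "rough stitch" illustration with OpenCV's generic stitcher.
# File names are placeholders; real 360 rigs use calibrated pipelines.
images = [cv2.imread(path) for path in ("cam0.jpg", "cam1.jpg", "cam2.jpg")]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("rough_stitch.jpg", panorama)
else:
    # Failures typically mirror the problems described above:
    # too little overlap, or subjects sitting too close to a seam.
    print(f"Stitch failed with status code {status}")
```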

Different cameras deal with these things in different ways.

Some of the cameras like the Nokia Ozo have optical flow -- as does the Google Odyssey rig.

Other cameras, like GoPro rigs, are shooting stereo pairs -- they’re just shooting in different directions, and then you have to piece it all together.

So it really depends on the camera. In the case of the Google Odyssey rig, this is a set of 15 GoPros configured in a circle, and it uses an algorithm that takes the footage from all of the cameras and stitches it together into one 360 stereo image via Google’s website.
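To give a flavor of what optical-flow-assisted seam blending does, here is a toy sketch in Python with OpenCV. It assumes you have already extracted the overlapping strips from two adjacent cameras; it is a simplified illustration of the idea, not Google’s or Nokia’s actual pipeline.

```python
import cv2
import numpy as np

def blend_overlap(left_strip, right_strip, alpha=0.5):
    """Toy flow-based seam blend: estimate dense optical flow between the
    overlapping strips of two adjacent cameras, warp the right strip partway
    toward the left strip's geometry, then cross-fade. Not a production stitcher."""
    left_gray = cv2.cvtColor(left_strip, cv2.COLOR_BGR2GRAY)
    right_gray = cv2.cvtColor(right_strip, cv2.COLOR_BGR2GRAY)

    # Dense flow mapping pixels in the left strip to their matches in the right strip.
    flow = cv2.calcOpticalFlowFarneback(left_gray, right_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    h, w = left_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Sample the right strip partway along the flow field, pulling it toward
    # the left strip's geometry before blending.
    map_x = (grid_x + alpha * flow[..., 0]).astype(np.float32)
    map_y = (grid_y + alpha * flow[..., 1]).astype(np.float32)
    warped_right = cv2.remap(right_strip, map_x, map_y, cv2.INTER_LINEAR)

    return cv2.addWeighted(left_strip, 1 - alpha, warped_right, alpha, 0)
```

Production systems do far more -- per-rig calibration, flow in both directions, temporal smoothing -- but the core idea of warping along a dense correspondence field before blending is the same.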

 

How effective are they? Are the algorithms sophisticated enough to deal with some of these issues that you’re talking about? It must be tremendously complex to automate that smoothing out process.

Dillon: Yes, it’s really interesting. And we’re seeing a company like Google committing to that process, which is really powerful.

They are constantly iterating on what that algorithm is doing. They have a lot of people out there with these Google Odyssey rigs, and as footage is uploaded through their website into their cloud, there’s a team of people saying, “Okay, that’s interesting -- this filmmaker has flagged an interesting issue. Let’s go email them and get their permission to look at the footage.”

When I was talking to the team over there, they gave one example of a problem that we have also seen here at MPC.

Imagine you’re on a beach, and there’s a row of slatted wooden fencing going all the way down the beach. When the optical flow algorithm looks at that repeated pattern of a white picket fence, it says, “Okay, I can see this fence and I’m just going to repeat this through my stitch.” What happens is that the algorithm isn’t intelligent enough to say, “Okay, I know exactly what the perspective is,” so things start to warp.

Google is working hard to make the algorithm smarter, and we’re seeing those improvements. Working with Google and being able to use their auto-stitching is definitely saving time and money for the VR industry.

 

What were some of the things that surprised you, whether it’s people getting ahead of a problem sooner than expected or vice versa, falling a little bit short and not quite addressing what needs to be addressed?

Dillon: I would say preview systems are one of those challenges. We’re seeing some good preview systems around certain cameras. The Nokia Ozo came out really high on that list because Nokia has committed to building a system. And they’ve worked carefully with the Foundry’s Cara plug-in, which actually helps you in post-production.

But Nokia Ozo’s preview system itself helps you on set. So that’s a powerful combination.

On a different front, the sensor itself -- the camera sensor -- and its ability to capture an image is definitely the top topic.

The GoPro sensors are fantastic. They are amazing little cameras for the size and frankly, for the commercial cost, which allows you to use multiples of them and put them together in interesting ways.

But it’s not the same as a professional, production-scale camera.

One interesting rig that we looked at was built out of Sony a7 cameras. It’s nicknamed the Dark Corner rig, after the company that started using it first, and it was built by a camera house in Los Angeles called Radiant Images.

They built a simple rig for Sony a7 cameras. It’s mono, it’s not stereo, so filmmakers who are always looking to shoot stereo don’t get that out of this rig.

But what they do get, to my point about sensors, is a higher-quality image that offers benefits once you’re in post-production. You can grade it to a higher level. And for a lot of filmmakers -- especially on the top end -- that’s a key part of getting a good image. It allows them to go into a color suite and get depth out of the image.

So there are different cameras out there for different needs...which I think is the way that this is moving.

 

What do you expect to see as you continue your ongoing evaluation? Where do you see the next frontiers of innovation happening? What are people telling you about what you should expect going forward?

Dillon: We’re seeing interesting developments across a number of different topics.

For instance, we have all been aware over the past year or two of the emergence of light field technology. It will be interesting to see how a camera, or a set of tools in a studio, will allow you to capture light fields -- which really means capturing everything that’s going on in the space around you -- and to manipulate it.

There’s a company called Lytro that we find really interesting. They’ve already got a large, car-sized prototype. It’s a big, boxy camera, and it can capture the entire light field in front of it. It’s not yet a 360 system, because that level of capture means terabytes and terabytes of data. But I think we’re going to see full 360 light-field capture from them.

For something like augmented reality, a company like Magic Leap is very interesting. They are working on technology that allows you to put a piece of computer generated (CG) content in front of you in the room, while also capturing information about the “actual” light in the room. This way, as the virtual object moves, the light reacts around it appropriately, as if it were really there.

This means the camera technology can look at the light in the room and say, “Okay, I’m going to change this content depending on what the light is.”

This is where we see the 360 world, the VR world and the AR world all colliding, coming together to meet a set of similar objectives: “What’s going on in the world around me -- whether it’s virtual or real -- how do I get that data, and how do I apply it to what I’m looking at?”