January 06, 2017

Technicolor’s Nicolas Mollet Describes How 360-Degree Immersive Video is Evolving to Transform the Art and Science of Entertainment

  • 360-degree immersive video will be the next frontier in entertainment, creating virtual reality experiences, but today those experiences come with some limitations.
  • One of the biggest limitations of today’s immersive 3D video experiences is the lack of embodiment: the viewer can see the 3D, 360-degree imagery, but has no sense of being physically present in that environment.
  • Another major limitation is the isolation of the viewer in immersive videos; the social dimension is an opportunity VR has yet to realize.
  • Technicolor has demonstrated how these limitations can be overcome by creating and embedding an avatar that reflects the viewer’s movements in the real world, and by creating virtual objects the viewer can interact with.

360-degree immersive video will be the next frontier in home entertainment, creating virtual reality experiences, but today those experiences come with some limitations. Nicolas Mollet, Principal Scientist with the Research and Innovation Group at Technicolor, leads a team of scientists working on virtual reality.

Here he explains some of the things his team is doing to overcome these limitations and expand the capabilities of virtual reality, which he demonstrated at CES 2017.


Nicolas, tell me about the projects you're working on and the issues you are trying to explore and advance.

Mollet: Right now we are looking beyond 360-degree video. Today, most of our production is focused on 360-degree video, but there is more we can do. For example, how can we recover the parallax inside those videos?


What is parallax?

Mollet: When you're inside a 360-degree video you are at the central point of a sphere, where the acquisition camera was located. This means that when you move your head to the side or up and down, your view of the objects in the video does not change, because you are still at the central point of the sphere. If we can recover the parallax, we can recreate the sensation of moving slightly in front of the object. That is one of the projects we are working on.
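To illustrate the distinction: in a standard 360-degree video, the pixel you see depends only on your view direction, so translating your head changes nothing; with a per-pixel depth estimate, the view ray can be reprojected from the new head position. A minimal sketch of the two cases (the function names and the single-depth-per-ray model are illustrative assumptions, not Technicolor's pipeline):

```python
import numpy as np

def sample_direction_no_parallax(head_offset, gaze_dir):
    # A standard 360 video is indexed purely by view direction, so a
    # translated head samples exactly the same pixel.
    return gaze_dir / np.linalg.norm(gaze_dir)

def sample_direction_with_parallax(head_offset, gaze_dir, depth):
    # With a depth estimate, the seen point sits `depth` along the gaze
    # ray from the sphere's centre; a translated head sees that point
    # from a slightly different direction, which is the parallax.
    point = depth * gaze_dir / np.linalg.norm(gaze_dir)
    new_dir = point - head_offset
    return new_dir / np.linalg.norm(new_dir)
```

Moving the head 10 cm sideways leaves the first function's result unchanged but tilts the second one's, which is exactly the "seeing the side of an object" effect described above.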


So that means if I move forward the focus of the objects changes in relation to where I am?

Mollet: Parallax is not about the focus, it’s the ability to see an object in its entirety: to see not only a 3D image, but to be able to see the side of an object when you move your head slightly, for example.

We don’t want to create a totally free viewpoint where we can move all around an object. That will require a lot of development, and is some ways off. We want to concentrate on what can be achieved in a reasonable timeframe, just adding a little bit of parallax.


Got you. So parallax is one thing. What is the second thing?

Mollet: Another project we are working on also looks at what more we can do with 360-degree videos. The experience of a 360-degree video is fairly limited: you're wearing a headset and you feel rather isolated. So we are working to give you a role inside the 360-degree video. We want to embody you inside the content.


One of the critiques of VR is that some people get disoriented because they cannot see themselves. They are only abstractly involved in the experience. How are you able to address that embodiment issue?

Mollet: That is exactly the point. As soon as you are inside the virtual environment you don’t perceive anything about your own real world. So when, for example, you move your arms, you are conscious of doing so but you don’t see that movement reflected in the virtual world.

In some of those 360 videos you have a body, and that is the worst case, because typically this body moves independently of yours, causing stress, discomfort, and nausea.

The aim of embodiment is to enable the user to perceive their body and to feel comfortable inside the 360-degree experience. As soon as you are able to see yourself, to feel present inside the video, you are much less likely to feel nauseous.


So part of that is being able to see your body: your hands, your arms? And that makes a difference to whether or not you feel comfortable in the virtual environment?

Mollet: That's right. You have a lot of internal sensors that contribute to your perception of the real world. Obviously you can see, you can hear, you can touch. But your body is a very complex system; there are many ways in which it contributes to your perception of the external environment.

The virtual world confuses those sensors. For example, even your internal organs are able to sense how your body is moving. You know without seeing that your head is at a certain point in space because you feel that through your muscles. It is not something you are conscious of; it is simply the way your body works. As soon as you create conflict with those internal sensors you feel unwell. In the best case you just feel uncomfortable, but in the worst case, you feel dizzy.


Tell me how you have addressed this in the technology that you were showcasing at CES 2017.

Mollet: At CES 2017 we used a headset combined with an external device that allowed us to sense the user and to track the position and motion of their extremities. We used software to recreate those extremities in the virtual world, and by doing that we were able to recreate the user's body inside the 360-degree video.
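Conceptually, driving such an avatar means transforming the tracked extremity positions from the sensor's coordinate frame into the virtual scene's frame on every frame, so the rendered hands follow the real ones. A minimal sketch, assuming the tracked extremities arrive as 3-D points and the viewer sits at the sphere's centre (all names are illustrative, not the demo's actual software):

```python
import numpy as np

def hands_to_scene(tracked_hands, head_pose):
    # tracked_hands: list of 3-D hand positions in the sensor's frame.
    # head_pose: 4x4 transform placing the viewer in the scene's frame.
    # Returns the hand positions in the 360-degree scene's frame.
    out = []
    for p in tracked_hands:
        hom = np.append(p, 1.0)            # homogeneous coordinates
        out.append((head_pose @ hom)[:3])  # apply rigid transform
    return out
```

In a real pipeline this transform would run per frame for every tracked joint, feeding the avatar's skeleton before the blending step described below.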

We have worked to blend this virtual body with the 360-degree content. That is not a trivial exercise, because you cannot just put a gaming avatar inside a video. You have to make it blend perfectly to create the right experience.

We’ve done that. Now, you are able to see yourself as a body in the experience: a character in the story or a ghost inside the experience. But the important thing is that, as soon as you have a virtual body, you want it to be able to interact with something.

So within the video we create an extra layer of real-time interactive objects and you are able to interact with those objects. We’re not trying to create an interactive story, just to make you feel more immersed, by making your environment, the content around you, react to what you do.

And finally, we are able to incorporate several users, creating a social virtual reality experience. Rather than having several people in front of the movie, there are several people inside the movie, with multiple points of view. All of the viewers can be embedded with different points of view or — and this is the most interesting aspect — they may be able to see themselves inside the experience and interact with each other.
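The shared-state idea behind such a social experience can be sketched simply: each viewer's pose lives in a session, and everyone else's pose is what gets rendered as an avatar from your point of view. This is an illustrative data structure only, not Technicolor's implementation; in a real system this state would be synchronized over the network every frame:

```python
from dataclasses import dataclass, field

@dataclass
class ViewerPose:
    position: tuple      # (x, y, z) in the scene's frame
    orientation: tuple   # e.g. a quaternion (x, y, z, w)

@dataclass
class SharedSession:
    # One entry per connected viewer, keyed by viewer id.
    poses: dict = field(default_factory=dict)

    def update(self, viewer_id, pose):
        # Called when a viewer's tracked pose changes.
        self.poses[viewer_id] = pose

    def others(self, viewer_id):
        # Everyone except yourself is rendered as an avatar.
        return {v: p for v, p in self.poses.items() if v != viewer_id}
```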


So they can see themselves and see other people who are active observers inside the experience?

Mollet: That is exactly right.


What were some of the technical challenges you had to overcome to bring us this demonstration?

Mollet: We built this whole experience from scratch to demonstrate the concept, because it is rather difficult to explain. We used a game engine to render the 360-degree video. Inside this game engine we run the movies in real time, and we had to do all of the synchronization and animation of the real-time objects on top of the movies, because the media needs to play on a TV, on a set-top box with a 360-degree player, or on more advanced equipment.

We had to develop the full experience from scratch. We implemented a lot of technical building blocks to achieve this goal: network connections, video blending, shaders for seamless blending, occlusion solutions, and so on.

We are still a long way from being able to occlude an object in a video automatically in real time. Technically, the video is created on the surface of a sphere. We put a camera at the center of this sphere, and all of the real-time objects are created between the camera and the sphere. The challenge is to handle occlusion when an object is supposed to be, for example, in the middle of a spaceship, while that spaceship has been created on the surface of the sphere.
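The occlusion problem described here can be sketched as a per-pixel depth comparison, assuming an estimated depth map for the video sphere is available, which, as he notes, is the hard part. The code below is an illustrative sketch of the idea, not the demo's actual shader:

```python
import numpy as np

def composite(video_rgb, video_depth, object_rgb, object_depth):
    # Per-pixel occlusion test between the 360 video (painted on the
    # sphere, with an estimated depth per pixel) and a real-time
    # virtual object rendered between the camera and the sphere.
    # The video wins wherever its content is closer than the object.
    object_visible = object_depth < video_depth
    mask = object_visible[..., None]           # broadcast over RGB
    return np.where(mask, object_rgb, video_rgb)
```

With a constant video depth (the sphere radius), the object always wins and simply floats in front of the video; recovering a real depth map is what lets the spaceship's hull hide the object.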


So you are creating a 360-degree video, you are superimposing some kind of animation that is game-like and you are bringing in the ability to embody several participants?

Mollet: We have created this experience to demonstrate the concept, because when you experience it, it is clear, but it’s very difficult to explain.

Our customers, the studios, are now asking what more we can do, and they are asking themselves those same questions. Could you add embodiment? Could you make it social?

We are in a good position to do both, because we have already demonstrated them with this new experience. So now we can focus on the most important things: providing a good workflow and authoring tools that enable us to efficiently produce and productize hybrid media.


Hybrid media is interesting because there are no rules for it. How long did it take you to build this experience?

Mollet: We were a team of researchers and it took us six months to create the video along with the artistic component and the technical component.


And that was just for a few minutes of video?

Mollet: Yes, the total experience is less than 10 minutes.


So what’s next?

Mollet: If we want to maintain a leading role in movie production we have to productize hybrid media. We have to deal with parallax, as I mentioned at the beginning. Even if we improve the video, the observer is still at the center.

There is also the question of embodiment. Should you see yourself, or an avatar? That is something we are working on. And we also have to work on the authoring tools, the pipeline.

In terms of experience, we also want to make progress on socialization, creating rich experiences with multiple points of view. We will create a new experience based on the same concept: we will shoot a real movie with multiple points of view, and we will show different embodiments and different ways to perceive the other viewers.

For the demonstration at CES 2017 we produced a CGI movie, and some people think we can only do this with a CGI movie, but that's not true at all. Very soon we will do this with a movie shot with rigs of cameras and with real actors, and we plan to add parallax at the same time.