Technicolor is developing today’s solutions for tomorrow’s interactive media environments. Through our Immersive Laboratory, we offer professionals and end users the solutions they need to extend their immersive experiences. We apply our expertise in computer graphics, computer vision and video processing to the virtual reality, augmented reality and light-field domains.

Virtual Reality

The Virtual Reality (VR) Technical Area's goal is to explore how VR may shape the future of the media industry.
Current explorations in the film industry focus mainly on 360° video or, at the other extreme, on fully real-time experiences. These first steps are a good starting point for creating true VR media experiences, where the user is at the center of an immersive and social experience. But current limitations such as isolation, lack of parallax, an unsettled cinematographic grammar, video-sphere perception and cyber sickness are among the many challenges and opportunities our research addresses in order to produce truly innovative, high-quality VR experiences.

Augmented Reality

The Augmented Reality (AR) Technical Area focuses on innovative technology for immersive blending and Mixed Reality (MR) applications.
Unlike standard AR, the goal of MR is to create a single mixed world where digital and physical realities seamlessly interact and blend together. We believe that MR technology will enable amazing new user experiences.
MR requires dealing with spatial interactions between real and virtual objects, as well as realistic graphics rendering, with virtual shadows consistent with the real lighting. Achieving this requires a very good understanding of the real environment, which raises many open problems. Our research topics include 3D scene analysis, photometric analysis, camera pose tracking and context-aware rendering.
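To illustrate the kind of photometric consistency involved, a classic technique for casting consistent virtual shadows onto a real photograph is differential rendering: render a proxy of the real scene twice, with and without the virtual object, and use the ratio of the two renders to darken the photo where the object casts a shadow. The sketch below is a minimal illustration of that general idea, not Technicolor's actual pipeline; all names are illustrative.

```python
import numpy as np

# Minimal sketch of differential rendering compositing (illustrative only):
# darken the real photo wherever the virtual object shadows the modeled scene,
# and paste the rendered object where its mask is set.
def differential_composite(real, render_with, render_without, object_mask):
    # Ratio < 1 where the virtual object blocks light in the modeled scene.
    ratio = render_with / np.maximum(render_without, 1e-6)
    shadowed_real = real * ratio
    return np.where(object_mask, render_with, shadowed_real)

# Toy example: a 2x2 image with the object at (0,0) and its shadow at (0,1).
real = np.full((2, 2), 0.8)
render_without = np.ones((2, 2))
render_with = np.array([[1.0, 0.5], [1.0, 1.0]])
mask = np.array([[True, False], [False, False]])
out = differential_composite(real, render_with, render_without, mask)
```

Pixels outside the object and its shadow are left untouched, so any error in the proxy model of the real scene only affects the regions the virtual object actually influences.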

Real-Time Visual Effects

The Real-Time Visual Effects (VFX) Technical Area focuses on advanced and innovative technologies for visual effects production. Its vision is to invent real-time VFX workflows that improve team collaboration and iteration on shots and sequences, helping filmmakers better tell their stories.

We provide novel solutions to speed up and simplify highly time-consuming, tedious processes for VFX artists and filmmakers. Our approach is driven by the emerging bridge between Film & TV and Games workflows, addressing problems ranging from facial rigging and virtual camera management to asset repurposing and real-time rendering.


Light Field Technologies

The Light Field Technologies Technical Area masters the properties of light by developing Light Field (LF) content acquisition and processing technologies. Light Field video brings a sense of depth to the video experience.
We believe future CE devices will acquire light field video through multi-camera architectures and will render it through AR/MR devices. Our technical area is engaged in developing new principles and technical solutions for next-generation display & capture devices.
Our research topics cover real-time Light Field capture & calibration methods, video processing for depth map & point cloud generation, and nanophotonics & nanofabrication for next-generation devices.
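A small, standard building block in depth-map and point-cloud pipelines of this kind is back-projecting a depth map into 3D with the pinhole camera model. The sketch below shows that one step only; parameter names are illustrative, and this is not Technicolor's code.

```python
import numpy as np

# Minimal sketch (illustrative only): back-project a depth map to a point
# cloud with the pinhole model (focal length f, principal point (cx, cy)).
def depth_to_point_cloud(depth, f, cx, cy):
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    x = (us - cx) * depth / f   # X = (u - cx) * Z / f
    y = (vs - cy) * depth / f   # Y = (v - cy) * Z / f
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Toy example: a flat 2x2 depth map at Z = 2 with a unit focal length.
pts = depth_to_point_cloud(np.full((2, 2), 2.0), f=1.0, cx=0.0, cy=0.0)
```

Each pixel yields one (X, Y, Z) point; with per-view depth maps from a calibrated rig, the per-camera clouds can then be fused into a single scene-level point cloud.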


Check out some of the amazing projects that our scientists are working on to push the boundaries of immersive experiences.

Neus Sabater


Light Field Technologies

Motivated by new Light Field acquisition systems, this video showcases the potential benefits of Light Field editing for post-production. Built on algorithms specially tailored for Light Field processing, a new object removal method can manage large amounts of data and guarantees consistency across views without resorting to conventional inpainting.

Patrick Perez


Real-time Visual Effects

Working in collaboration with the Max Planck Institute in Saarbrücken, Germany, we explore new automatic ways to build a 3D face model that captures the morphology, expressions and texture of an individual using only a single video of that person (including low-quality interview footage). The resulting editable facial model can then be animated by artists or driven by single-view markerless capture of another person.

Nicolas Mollet


Virtual Reality

Looking to expand virtual reality immersion from a single-user to a multi-user experience within the same media, this video demonstrates the use of head-mounted displays and the mixing of 360° video with real-time objects. Users are embodied in the media as distinct characters, able to see each other, adopt different points of view and interact with the content.


Didier Doyen


Light Field Technologies

To create next-generation Light Field content that enables new post-production features, this video demonstrates a real-time Light Field workflow from live capture to processed images. This new acquisition platform combines a 16-camera video rig with sophisticated processing of the large amount of captured data, enabling synthetic aperture imaging and viewpoint changes.
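Synthetic aperture imaging from a camera rig can be sketched as a shift-and-average: each view is shifted to cancel the parallax of a chosen focal plane, so objects on that plane align and stay sharp while everything off-plane blurs away. The following is a minimal sketch assuming a rectified linear rig with known baselines; it is an illustration of the general technique, not the platform's actual processing.

```python
import numpy as np

# Minimal sketch of synthetic aperture refocusing (illustrative only):
# align all rectified views on a chosen focal plane, then average them.
def synthetic_aperture(views, baselines, disparity):
    # `disparity` is the focal plane's parallax in pixels per unit baseline.
    acc = np.zeros_like(views[0], dtype=float)
    for img, b in zip(views, baselines):
        # An on-plane point moves by b * disparity pixels between views;
        # rolling by the opposite amount aligns it before averaging.
        acc += np.roll(img, -int(round(b * disparity)), axis=1)
    return acc / len(views)

# Toy example: 4 views of a point on the focal plane (disparity 2 px
# per unit baseline), bright at column 10 + 2*i in view i.
views = [np.zeros((2, 32)) for _ in range(4)]
for i, v in enumerate(views):
    v[:, 10 + 2 * i] = 1.0
refocused = synthetic_aperture(views, [0, 1, 2, 3], 2.0)
```

After refocusing, the on-plane point stacks to full brightness at column 10, while off-plane content would be spread across columns; with enough cameras, this averaging is what lets a synthetic aperture "see through" foreground occluders.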