For my Ph.D., I worked for three years on a French research project (RNRT/CNRS) with the car manufacturer Renault, building new tools to index, compress and watermark 3D CAD objects. My thesis dealt with computer vision and 2D/3D object analysis. I proposed a new 2D shape descriptor to retrieve 3D objects from a picture or a drawing, and a new color descriptor to find similar pictures in large databases.
After these first experiments, I was hired by Orange Labs Rennes for a three-year postdoc position on an online 3D cartography project. The goal was to build a realistic 3D representation of Paris, with all the buildings, monuments and ground, that would be close to reality and could be displayed online in a web browser. I worked on this project for 18 months as a postdoc and stayed another 18 months as a subcontractor, first as a researcher and then as research director of a fifty-engineer project.
During this project, I proposed a new procedural model of 3D facades based on a database of facade elements. I developed several picture-analysis tools to build the procedural models and the database automatically from a set of pictures of buildings. This work was used to describe and build all the facades of Paris and was presented at a SIGGRAPH conference. Based on the same concept, I developed an automatic tool to analyze 3D roof models and extract their main characteristics. Thanks to the scalability of the procedural models, these data could then be sent to rebuild the roof according to the user's viewpoint. To increase the quality of the ground textures, I proposed to create a large vector picture of the Paris ground. This picture was made by hand from 40 layers: street, traffic marks, sidewalk, grass, tree grates… I created tools to edit the layers, compress the data and render the ground texture online according to the user's position and point of view.
In 2009, I started to work for Canon Research Center France as a subcontractor in video compression. During two years, I proposed a new architecture and modified an MPEG-4 AVC encoder and decoder to implement the scalable extension, SVC. In particular, I optimized the AVC and SVC codecs to increase performance through parallel processing. To increase the robustness of the decoders, I put in place error-detection processes and several error-concealment algorithms.
During the creation of the new HEVC video compression standard, I made several contributions to the standard jointly with the Canon research team. We created new methods to update the reference pictures and to select the motion vector predictors.
In parallel with the creation of the HEVC standard, Canon Research Center France wanted its own implementation of the HEVC codec. During two years, I was in charge of the architecture and development of an HEVC encoder and decoder. I defined the architecture and developed the entire codec alone, from scratch, achieving real-time 4K decoding (50 FPS) and fast 4K encoding (20 FPS).
In 2012, I created my own company to offer my expertise directly to companies interested in codecs, video compression and software development. I worked this way for three years, and this new adventure grew my network, my ability to communicate and find new customers, my understanding of company administration, and my skills in managing large projects from beginning to end. In this context, I joined Technicolor in 2013 to work on the ATSC 3.0 project in video compression. After six months as a subcontractor, I was hired by Technicolor as a full-time member of the team.
For three years, I have been working on the ATSC 3.0 team (I&R/ISL/ATS). The goal of this project is to build, propose, demonstrate, defend and ensure the integration of our research teams' contributions into the next-generation American broadcast standard, ATSC 3.0. In this team, I am the main contributor to the architecture and development of our testbed platform, which simulates a real ATSC 3.0 broadcast chain.
Our testbed has been integrated into Sinclair's experimental OFDM transmission system in Baltimore, Maryland. Thanks to this deployment, broadcasters will be able to deliver the highest-quality content, including live 4K broadcast, in a simultaneous transmission to consumers both at home and on the go. I have presented our testbed in several internal demonstrations and three times at the NAB Show (2014, 2015 and 2016). During these demonstrations, I presented our solutions to the main American broadcasters, showing live the advantages of Technicolor's technology. Our project was selected as one of the five "Outstanding Technicolor Engineering Projects" at the Technicolor Engineering Awards 2014.
Since June 2015, I have integrated several HDR distribution and production solutions into our system. This allows us to compare competing methods against the solution proposed by Technicolor for various HDR and SDR inputs. At this point, we have created a live end-to-end HDR production and distribution chain. From an HDR or SDR input, we linearize the signal with the appropriate inverse transfer function if the signal is HDR (inverse PQ, HLG or S-Log3), or apply a live inverse tone mapping operator if the signal is SDR. For distribution, the HDR frames are quantized to 10 bits with one of several methods (PQ, HLG or Technicolor's Advanced HDR, Prime Single), encoded, packetized in MMT and distributed. The receiver decodes the frames and either displays them directly on an SDR TV or applies the inverse transform to restore the HDR signal on an HDR TV. Several demonstrations were organized to present our results to the US broadcasters and to the main technology companies (NAB Futures 2015, CES 2016, HPA 2016 and NAB Show 2016). These presentations proved the validity of the Technicolor solutions and convinced broadcasters to adopt or study them. This work was carried out in close collaboration with IP&L and Technology Licensing. In January 2018, ATSC adopted the Technicolor HDR solution into the standard.
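To give an idea of what the linearization step involves for a PQ-coded signal, the following is a minimal Python sketch of the standard SMPTE ST 2084 (PQ) EOTF, which maps a normalized code value back to absolute luminance. The constants are those defined in ST 2084; the function name and per-sample interface are illustrative, not the production implementation.

```python
# Sketch of PQ linearization (SMPTE ST 2084 EOTF): code value -> luminance.
# Constants as defined in ST 2084.
M1 = 2610 / 16384        # 0.1593017578125
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875

def pq_eotf(n: float) -> float:
    """Map a normalized PQ code value n in [0, 1] to luminance in cd/m^2."""
    p = n ** (1 / M2)
    num = max(p - C1, 0.0)      # clamp below black
    den = C2 - C3 * p
    return 10000.0 * (num / den) ** (1 / M1)
```

A code value of 1.0 maps to the PQ peak of 10,000 cd/m2, and 0.0 to black; the distribution side applies the inverse of this curve when quantizing the linear frames back to 10 bits.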
In November 2016, I joined the new compression ecosystems technical area (R&I/ISL/CES) to develop a point cloud compression solution. Virtual reality and point cloud compression are new topics in MPEG, and a specialized team was created in R&I to address them. Our main work consisted of developing a reference software, and we proposed our technologies to MPEG-3DCG. In this team, I am in charge of the software architecture of the whole system (reference software plus demonstrator). In February 2017, the MPEG PCC renderer was chosen among others as the reference for the subjective tests of the PCC Call for Proposals. Since then, as software coordinator, I have been in charge of maintaining the official MPEG PCC renderer.
In October 2017, our team answered the MPEG PCC Call for Proposals and our solution took second place. Since then, we have been working within the MPEG PCC group to bring our tools into the PCC test model. During the January 2018 MPEG meeting, I was selected by the MPEG PCC group as reference software coordinator for Test Model 2.
My skills have been recognized by the Technicolor career path committee. I was hired three years ago as a software developer and was promoted to senior engineer and architect last June. Since June 2017, my experience and skills have also been recognized by the Technicolor Fellowship Network community, which admitted me as an Associate Member.