Philippe Guillotel

Distinguished Scientist

Immersive Lab Researcher

I have been working with Technicolor Research & Innovation (formerly Thomson Research) since 1989. I am currently part of the Rennes Lab, France, focusing on user experience (immersive experiences and user sensing), human perception technologies (vision and haptics), video compression (2D and 3D) and artificial intelligence (including robotics and machine/deep learning) for entertainment. I graduated from the University of Rennes I with an M.S. degree in 1986 and a Ph.D. in 2012, and from ENST-Br with a Dipl.-Ing. degree in 1988.

From 2008 to 2012, I was the manager of the Video Processing & Perception research lab (36 people, 6 PhD students, 2 post-docs, 11 interns). During these years, I was in charge of several research projects and teams (4 to 6 people each), managing relationships with our business divisions as well as with academic partners (INRIA, IRISA, Rennes University, École Centrale de Lyon) and industrial research labs through collaborative projects and internships (PhD, master's, ...). I have also been involved in several European and French collaborative projects, such as EUREKA95 (HD-MAC), RACE dTTb (digital transmission), ACTS HAMLET (MPEG-2), ESPRIT/ROXY (Internet multicast), IST/Ozone (ambient intelligence) and ITEA/HD4U (HDTV digital TV). I was project leader for the RNRT/VISI (video over IP) project and work-package leader of the more recent FUI/FuturIm@ges (future image formats, 2D/3D) project. I am now involved in an ITN Marie Curie European initiative dedicated to video compression. I have also been a member of several standards organizations (MPEG, AFNOR, DVB) and groups (GdR-ISIS, Pôle de compétitivité "Images & Réseaux").

Since 2010, I have been a Distinguished Scientist at Technicolor Research & Innovation, in charge of New User Experiences & Interfaces, Autonomous Scene Capture & Modeling, Video Processing & Coding, and more recently Deep Learning and the Digital Human.

What's new:

Open Positions:
Senior Researchers (CDI) in Video Compression, Computer-Generated Imagery (CGI), Computer Vision, Machine/Deep Learning
PhD Position on Deep Learning for character animation, in collaboration with an academic laboratory (INRIA).
PhD Position on Deep Learning for video compression, in collaboration with an academic laboratory (INRIA).
Post-doc Position on Deep Learning for CGI applications

Research interests:

Image Processing, Video Compression, Video Streaming, Human Perception, Human Sensing and Physiological signals, Immersive Experiences, Man-Machine Interfaces, Interactivity, Haptics, Robotics and AI (inc. machine learning and deep learning).

Publications:

Main publications since 2012; more references are available on Google Scholar and ResearchGate.

Journals

  • J. Begaint, D. Thoreau, P. Guillotel, C. Guillemot, “Region-Based Prediction for Image Compression in the Cloud”, IEEE Trans. on Image Processing, Vol. 27, N° 4, pp. 1835-1846, 2018.
  • Q. Galvane, C. Lino, M. Christie, J. Fleureau, F. Servant, P. Guillotel, “Directing Cinematographic Drones”, ACM Trans. on Graphics (TOG), Vol. 37, N° 3, August 2018. Also presented as a SIGGRAPH’18 talk.
  • H. Becker, J. Fleureau, P. Guillotel, F. Wendling, I. Merlet, L. Albera, “Emotion recognition based on high-resolution EEG recordings and reconstructed brain sources”, IEEE Trans. on Affective Computing, Vol. PP, N° 99, 2017.
  • Alain, C. Guillemot, D. Thoreau, P. Guillotel, “Scalable image coding based on epitomes”, IEEE Trans. on Image Processing, Vol. 26, N° 8, 2017.
  • Turban, F. Urban, P. Guillotel, “Extrafoveal Video Extension for an Immersive Viewing Experience”, IEEE Trans. on Visualization and Computer Graphics, Vol. 23, N° 5, 2017.
  • P. Guillotel, F. Danieau, J. Fleureau, I. Rouxel, M. Christie, Q. Galvane, A. Jhala, R. Ronfard, “Introducing basic principles of haptic cinematography and editing”, The Eurographics Association, 2016.
  • F. Danieau, A. Lécuyer, P. Guillotel, J. Fleureau, N. Mollet, M. Christie, "A Kinesthetic Washout Filter for Force-Feedback Rendering", IEEE Trans. on Haptics, Vol. 8, N° 1, 2015.
  • Alain, C. Guillemot, D. Thoreau, P. Guillotel, “Inter-prediction methods based on linear embedding for video compression”, Signal Processing: Image Communication, Vol. 37, 2015.
  • F. Danieau, J. Fleureau, P. Guillotel, N. Mollet, M. Christie, A. Lécuyer, "Toward Haptic Cinematography: Enhancing Movie Experience with Haptic Effects based on Cinematographic Camera Motions", IEEE Multimedia, Vol. 21, N° 2, 2014.
  • S. Chérigui, C. Guillemot, D. Thoreau, P. Guillotel, P. Perez, "Correspondence Map-Aided Neighbor Embedding for Image Intra Prediction", IEEE Trans. on Image Processing, Vol. 22, N° 3, 2013.
  • P. Guillotel, A. Aribuki, Y. Olivier, F. Urban, "Perceptual Video Coding Based on MB Classification and Rate-Distortion Optimization", Signal Processing: Image Communication - Special Issue on biologically inspired approaches for visual information processing and analysis, Vol. 28, N° 8, 2013.
  • F. Danieau, A. Lécuyer, P. Guillotel, J. Fleureau, N. Mollet, M. Christie, "Enhancing audiovisual experience with haptic feedback: a survey", IEEE Trans. on Haptics, Vol. 6, N° 2, 2013.
  • J. Fleureau, P. Guillotel, Q. Huynh-Thu, "Physiological-Based Affect Event Detector for Entertainment Video Applications", IEEE Trans. on Affective Computing, Vol. 3, N° 3, 2012.

Conferences/workshops

  • F. Danieau, P. Guillotel, O. Dumas, T. Lopez, B. Leroy, N. Mollet, "HFX Studio: Haptic Editor for Full-Body Immersive Experiences", in Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology (VRST '18), Article 37, 9 pages, November 2018.
  • F. Hawary, G. Boisson, C. Guillemot, P. Guillotel, “Compressive 4D Light Field Reconstruction Using Orthogonal Frequency Selection”, in Proceedings of the 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, pp. 1-5, October 2018.
  • J. Begaint, F. Galpin, P. Guillotel, C. Guillemot, “Region-based models for motion compensation in video compression”, in Proceedings of the Picture Coding Symposium (PCS), pp. 1-5, June 2018.
  • A. Costes, F. Danieau, J. Fleureau, P. Guillotel, F. Argelaguet, A. Lecuyer, “KinesTouch: multi-dimensional texture rendering on tactile surface with force-feedback”, to be published in EuroVR, October 2018.
  • A. Costes, F. Danieau, F. Argelaguet, A. Lecuyer, P. Guillotel, “Haptic Material: A Holistic Approach for Haptic Texture Mapping”, in Proceedings of EuroHaptics, Pisa, Italy, pp. 1-12, June 2018.
  • E. Callens, F. Danieau, A. Costes, P. Guillotel, “A tactile surface for digital sculpting in virtual environment”, in Proceedings of EuroHaptics, Pisa, Italy, pp. 1-12, June 2018.
  • Rai, P. Le Callet, P. Guillotel, “Which saliency weighting for omnidirectional image quality assessment?”, 9th IEEE International Conference on Quality of Multimedia Experience (QoMEX), 2017.
  • Alain, C. Guillemot, D. Thoreau, P. Guillotel, “Learning clustering-based linear mappings for quantization noise removal”, IEEE International Conference on Image Processing (ICIP), 2016.
  • J. Fleureau, Y. Lefevre, F. Danieau, P. Guillotel, A. Costes, “Texture Rendering on a Tactile Surface Using Extended Elastic Images and Example-Based Audio Cues”, International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, 2016.
  • Q. Galvane, J. Fleureau, F. L. Tariolle, P. Guillotel, “Automated cinematography with unmanned aerial vehicles”, in Proceedings of the Eurographics Workshop on Intelligent Cinematography and Editing (WICED '16), 2016.
  • J. Fleureau, Q. Galvane, F. L. Tariolle, P. Guillotel, “Generic drone control platform for autonomous capture of cinema scenes”, Proceedings of the 2nd ACM Workshop on Micro Aerial Vehicle Networks, Systems, and Applications for Civilian Use, 2016.
  • J. Bégaint, D. Thoreau, P. Guillotel, M. Türkan, “Locally-weighted template-matching based prediction for cloud-based image compression”, Proceedings of the IEEE Data Compression Conference (DCC), 2016.
  • Du, E. Shu, F. Tong, Y. Ge, L. Li, J. Qiu, P. Guillotel, J. Fleureau, F. Danieau, D. Muller, “Visualizing the emotional journey of a museum”, Proceedings of the 2016 EmoVis Conference on Emotion and Visualization, 2016.
  • Turkan, D. Thoreau, P. Guillotel, "Epitomic Image Factorization via Neighbor-Embedding", Proceedings of the IEEE International Conference on Image Processing (ICIP), 2015.
  • Alain, S. Chérigui, C. Guillemot, D. Thoreau, P. Guillotel, "Epitome Inpainting with In-loop Residue Coding for Image Compression", Proceedings of the IEEE International Conference on Image Processing (ICIP), 2014.
  • Turkan, D. Thoreau, P. Guillotel, "Iterated Neighbor-Embeddings for Image Super-Resolution", Proceedings of the IEEE International Conference on Image Processing (ICIP), 2014.
  • J. Fleureau, P. Guillotel, I. Orlac, "Affective Profiles of Movies and Operas Based on the Physiological Responses of the Audience", Proceedings of the Very Large Data Bases International Conference (VLDB) - Workshop on Personal Data Analytics in the Internet of Things, 2014.
  • F. Danieau, J. Fleureau, N. Mollet, M. Christie, P. Guillotel, A. Lécuyer, "Haptic Cinematography: Enhancing Multimedia Experience with Haptic Effects based on Camera", Proceedings of the ACM International Conference on Multimedia, 2013.
  • B. Nguyen, J. Fleureau, C. Chamaret, P. Guillotel, "Calibration-Free Gaze Tracking Using Particle Filter", Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), 2013.
  • Alain, S. Chérigui, C. Guillemot, D. Thoreau, P. Guillotel, "Locally Linear Embedding Methods for Inter Frame Coding", Proceedings of the IEEE International Conference on Image Processing (ICIP), 2013.
  • Turkan, D. Thoreau, P. Guillotel, "Optimized Neighbor Embeddings for Single-Image Super-Resolution", Proceedings of the IEEE International Conference on Image Processing (ICIP), 2013.
  • C. Guillemot, S. Chérigui, D. Thoreau, P. Guillotel, "K-NN Search Using Local Learning Based on Regression for Neighbor Embedding-based Image Prediction", Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2013.
  • F. Danieau, J. Fleureau, J. Bernon, P. Guillotel, M. Christie, N. Mollet, A. Lécuyer, "H-Studio: An Authoring Tool for Adding Haptic and Motion Effects to Audiovisual Content", Proceedings of the ACM Symposium on User Interface Software and Technology (UIST), 2013.
  • J. Fleureau, P. Guillotel, I. Orlac, "Affective Benchmarking of Movies Based on the Physiological Responses of a Real Audience", Proceedings of the International Conference on Affective Computing and Intelligent Interaction (ACII), 2012.
  • F. Danieau, J. Fleureau, A. Cabec, P. Kerbiriou, P. Guillotel, M. Christie, N. Mollet, A. Lécuyer, "A Framework for Enhancing Video Viewing Experience with Haptic Effects of Motion", Proceedings of the IEEE Haptics Symposium (Haptics), 2012.
  • F. Danieau, J. Fleureau, A. Cabec, P. Kerbiriou, P. Guillotel, M. Christie, N. Mollet, A. Lécuyer, "Measuring the Quality of Haptic-Audio-Visual Experience", Proceedings of the IEEE Haptics Symposium - Workshop on Affective Haptics, 2012.
  • S. Chérigui, C. Guillemot, D. Thoreau, P. Guillotel, P. Perez, "Hybrid template and block matching algorithm for image intra prediction", Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2012.
  • S. Chérigui, C. Guillemot, D. Thoreau, P. Guillotel, P. Perez, "Map-aided locally linear embedding methods for image prediction", Proceedings of the IEEE International Conference on Image Processing (ICIP), 2012.
  • Turkan, D. Thoreau, P. Guillotel, "Self-Content Super-Resolution for Ultra-HD Up-Sampling", Proceedings of the European Conference on Visual Media Production (CVMP), 2012.
  • F. Danieau, J. Fleureau, P. Guillotel, M. Christie, N. Mollet, A. Lécuyer, "HapSeat: Producing motion sensation with multiple force-feedback embedded in a seat", Proceedings of the ACM Symposium on Virtual Reality Software and Technology (VRST), 2012.
  • J. Fleureau, C. Penet, P. Guillotel, C.-H. Demarty, "Electrodermal activity applied to violent scenes impact measurement and user profiling", Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2012.

Demos

  • F. Danieau, J. Fleureau, N. Mollet, P. Guillotel, M. Christie, A. Lécuyer, "HapSeat: A Novel Approach to Simulate Motion in Audiovisual Experiences", International Conference and Exhibition on Computer Graphics and Interactive Techniques (SIGGRAPH) - Emerging Technologies, 2013.
  • F. Danieau, J. Fleureau, N. Mollet, M. Christie, P. Guillotel, A. Lécuyer, "HapSeat: A Novel Approach to Simulate Motion in a Consumer Environment", ACM SIGCHI Conference on Human Factors in Computing Systems (CHI) - Interactive Technologies, 2013.

Projects:

I have worked for Technicolor Research Labs in the following areas:

  • Research on Video Processing, Content Representation & Coding, with a special focus on MPEG standards (MPEG-2, MPEG-4 and extensions, HEVC). I have contributed to the specification and development of our business divisions' products, such as real-time video encoders, taking into account implementation constraints (hardware, software and IC). These encoders have been recognized among the most efficient broadcast encoders worldwide, winning several awards. Research areas also include content representation for new formats (UHD, HDR, 3D, ...), new coding schemes and paradigms (epitomes, non-local prediction, deep learning, advanced scene models), adaptive encoding schemes and cognitive coding.
  • Haptic Interfaces for Multimedia Entertainment. We proposed a new area, haptic cinematography, and we are conducting research on new interfaces and interaction techniques to provide users with new, immersive experiences. This work includes HapSeat, a chair providing haptic effects while watching movies; Touchy, an interface providing tactile sensations on touchscreens through a cursor and visual effects; KinesTouch, a multi-dimensional texture-rendering system for tactile surfaces with force feedback; and other tangible interfaces for creating assets.
  • Human Sensing, in two directions: human vision and biological-signal analysis. On the vision side, we worked on human attention modeling, in particular saliency-map computation, subjective video quality metrics and re-framing applications (a public eye-tracking database of 360° images is available). The Technicolor saliency model is described in several journal papers, and the subjective metrics have been used to optimize video encoders. On the biological-sensing side, we developed a system to capture spectators' galvanic skin response (GSR) in movie theaters and analyze the signals to infer a movie's affective profile. We have also studied how EEG signals can be used to detect emotional valence and arousal while watching movie content (a public database is available).
  • Robotics. We developed a complete system to control cinematographic UAVs (drones). A dedicated interface lets the operator specify either a camera path or an on-screen shooting view, for one to three drones and several moving targets (actors). Artificial intelligence then dynamically computes the best path, taking into account safety, obstacles, the expected viewpoint, actor positions, drone capabilities, cinematographic rules and the positions of the other drones.
  • Video Streaming over the Internet and other distribution systems (based on MPEG-2 TS and IP protocols). I was active in writing the DVB specification for the deployment of Digital Terrestrial Television in Europe, and from 1998 to 2000 I studied adaptive video streaming over IP networks using scalable video coding and modified RTP protocols.
  • Digital modulation & error protection for robust video delivery. My work in this area led to the development of joint source-channel coding schemes.
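To give a flavor of the multi-criteria drone path selection mentioned in the robotics item above, here is a minimal sketch: each candidate path is scored as a weighted sum of penalty terms (safety margin, obstacle clearance, framing error, inter-drone separation), and the lowest-cost candidate is selected. All names, weights and criteria below are illustrative assumptions, not the actual Technicolor planner.

```python
# Hypothetical sketch of weighted multi-criteria path scoring for
# cinematographic drones. Criteria and weights are illustrative only.
from dataclasses import dataclass

@dataclass
class Candidate:
    safety_margin: float       # meters to the nearest actor (larger is safer)
    obstacle_clearance: float  # meters to the nearest obstacle (larger is better)
    viewpoint_error: float     # deviation from the requested framing (smaller is better)
    drone_separation: float    # meters to the closest other drone (larger is better)

# Example weights: safety dominates, framing quality matters least.
WEIGHTS = {"safety": 5.0, "obstacle": 3.0, "view": 1.0, "separation": 2.0}

def cost(c: Candidate) -> float:
    """Lower cost is better; inverse terms heavily penalize small margins."""
    return (WEIGHTS["safety"] / max(c.safety_margin, 1e-6)
            + WEIGHTS["obstacle"] / max(c.obstacle_clearance, 1e-6)
            + WEIGHTS["view"] * c.viewpoint_error
            + WEIGHTS["separation"] / max(c.drone_separation, 1e-6))

def best_path(candidates: list[Candidate]) -> Candidate:
    """Pick the candidate path with the lowest aggregate cost."""
    return min(candidates, key=cost)
```

In a real planner this scoring would run inside a dynamic re-planning loop as actors and drones move; the sketch only shows the static selection step.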

I am now focusing on next-generation Video Content Representation & Coding (cognitive deep compression), Human Factors (perception, vision, emotion), new Man-Machine Interfaces to improve the user experience (haptics, tangible interfaces), and the Digital Human (modeling, animation and interactivity, using deep learning where relevant).
