September 28, 2017

Technicolor Brings On the ICME.Salient360! Grand Challenge

The challenge at ICME’17 enabled a better understanding of how users watch a 360° image and scan content through head and eye movement.
  • The results provided researchers with a database of more than sixty 360° images and tools to perform similar experiments.

ICME is the flagship multimedia conference and expo sponsored by four IEEE societies – an international gathering of industry professionals discussing progress and the latest developments in multimedia technologies and related fields. This year's 18th annual ICME conference was held in Hong Kong under the theme "The New Media Experience," with a focus on enabling next-generation 3D/AR/VR experiences and applications.

Among the various sessions and events held around this theme was the exciting ICME.Salient360! Grand Challenge, co-organized by Technicolor and the University of Nantes, France. The challenge: understand how users watch a 360° image, analyzing how they scan this type of content through a combination of head and eye movement.

The Salient360! challenge had two key objectives:

  • Produce a dataset ensuring easily and precisely reproducible results and sustainable research, in line with IEEE principles.
  • Set a first baseline for a taxonomy of several types of visual attention models, along with the appropriate methodology and ground-truth data to test each of them (see the evaluation sketch after this list).
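
To illustrate what testing such models against ground truth can look like, here is a minimal Python sketch that scores a predicted saliency map against a ground-truth head/eye map using Pearson correlation and KL divergence, two metrics commonly used in saliency benchmarking. The metrics, map shapes, and function names are illustrative assumptions, not the challenge's official evaluation protocol:

    # Sketch: score a predicted saliency map against a ground-truth map.
    # Metrics and shapes are assumptions, not the official challenge toolkit.
    import numpy as np

    def normalize_to_distribution(sal_map: np.ndarray) -> np.ndarray:
        """Shift to non-negative values and scale so the map sums to 1."""
        sal_map = sal_map - sal_map.min()
        total = sal_map.sum()
        if total > 0:
            return sal_map / total
        return np.full_like(sal_map, 1.0 / sal_map.size)

    def pearson_cc(pred: np.ndarray, gt: np.ndarray) -> float:
        """Linear correlation between the two maps (higher is better)."""
        return float(np.corrcoef(pred.ravel(), gt.ravel())[0, 1])

    def kl_divergence(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-12) -> float:
        """KL(gt || pred) over the normalized maps (lower is better)."""
        p = normalize_to_distribution(pred).ravel() + eps
        g = normalize_to_distribution(gt).ravel() + eps
        return float(np.sum(g * np.log(g / p)))

    # Toy example: random maps at a 2:1 equirectangular resolution.
    rng = np.random.default_rng(0)
    predicted = rng.random((512, 1024))     # hypothetical model output
    ground_truth = rng.random((512, 1024))  # hypothetical head/eye ground truth
    print(f"CC : {pearson_cc(predicted, ground_truth):.3f}")
    print(f"KLD: {kl_divergence(predicted, ground_truth):.3f}")
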

Among the 33 candidates who entered, a winner was chosen in each of three categories, and two special awards were given:

  • Head+Eye model algorithm winner: Mikhail Startsev, TU Munich - Germany
  • Head model algorithm winner: Pierre Lebreton, Zhejiang University - China
  • Scan-path model winner: Xiongkuo Min, Shanghai Jiao Tong University (SJTU) - China
  • Best student award: Kao Zhang, Wuhan University - China
  • Special attendance award: Mikhail Startsev, TU Munich - Germany

The Grand Challenge ultimately provided researchers with a database of more than sixty 360° images, classified into five types of content – small indoor rooms, grand halls, naturescapes, cityscapes, and human faces – along with associated head-map, eye-map, and gaze-path ground-truth data from real user experiments conducted at Technicolor and the University of Nantes. It also produced a methodology and tools to compute these ground-truth maps and perform similar experiments.
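
To make the database structure concrete, the following is a hypothetical Python sketch of how such a collection of stimuli and ground truth might be organized and iterated. The directory layout, file names, and formats are assumptions for illustration and may not match the released dataset:

    # Hypothetical layout: ~60 equirectangular images in five content
    # categories, each paired with head-map, eye-map, and scan-path ground
    # truth. File naming is an illustrative assumption, not the released format.
    from pathlib import Path
    from dataclasses import dataclass

    CATEGORIES = ["indoor_rooms", "grand_halls", "naturescapes",
                  "cityscapes", "human_faces"]

    @dataclass
    class Stimulus:
        image: Path      # 360° equirectangular image
        head_map: Path   # ground-truth head saliency map
        eye_map: Path    # ground-truth head+eye saliency map
        scanpaths: Path  # recorded gaze paths (e.g., CSV of fixations)
        category: str

    def iter_dataset(root: Path):
        """Yield one Stimulus per image found under root/<category>/."""
        for category in CATEGORIES:
            for img in sorted((root / category).glob("*.jpg")):
                stem = img.stem
                yield Stimulus(
                    image=img,
                    head_map=root / category / f"{stem}_head.png",
                    eye_map=root / category / f"{stem}_eye.png",
                    scanpaths=root / category / f"{stem}_scanpath.csv",
                    category=category,
                )

    if __name__ == "__main__":
        for s in iter_dataset(Path("salient360_dataset")):
            print(s.category, s.image.name)
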

Find detailed information and instructions for accessing the dataset at Technicolor – innovation for the scientific community.


Citing the database

Yashas Rai, Patrick Le Callet, and Philippe Guillotel. “Which saliency weighting for omni directional image quality assessment?” In Proceedings of the IEEE Ninth International Conference on Quality of Multimedia Experience (QoMEX'17). Erfurt, Germany, pp. 1-6, June 2017.

Yashas Rai, Jesús Gutiérrez, and Patrick Le Callet. “A Dataset of Head and Eye Movements for 360 Degree Images.” In Proceedings of the 8th ACM on Multimedia Systems Conference (MMSys'17). ACM, New York, NY, USA, pp. 205-210, June 2017.