**4. Light field**

The Light Field (LF) expresses the radiance as a function of position and direction in regions of free space [18, 19]. In other words, it describes the intensity of every light ray crossing a region of space. Capturing all the light rays in a scene allows a perspective view to be generated from any position. LF technology can therefore be used effectively in many applications, from accurate passive depth estimation to viewpoint change and view synthesis, which are useful in augmented reality content capture and movie post-production.
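A common way to make this definition concrete is the two-plane parameterization, a standard convention in the LF literature (assumed here; it is not spelled out in the text): each ray is indexed by its intersection $(u, v)$ with an aperture plane and $(s, t)$ with an image plane,

```latex
L = L(u, v, s, t),
\qquad
I(s, t) = \iint_{A} L(u, v, s, t)\, \mathrm{d}u\, \mathrm{d}v ,
```

where $I(s, t)$ is the conventional 2D photograph obtained by integrating the radiance over the aperture $A$. A 2D camera thus discards the angular coordinates $(u, v)$ that an LF capture preserves.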

Capturing an LF is technologically quite complex: since the light field describes rays of varying position and direction, the scene must be recorded from multiple positions to obtain this information.

To this aim, different techniques can be adopted: camera arrays, camera gantries, or plenoptic cameras. By arranging multiple cameras into an array, the entire LF can be captured at once; this approach has been used with planar arrays of up to 128 cameras. A different system moves a single camera around a stationary scene to measure the incident light rays. Plenoptic imaging systems, in turn, place a micro-lens array at the focal plane of the main lens, in front of the imaging sensor, as shown in **Figure 3**.
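In a plenoptic camera, each micro-lens covers a small block of sensor pixels, and the pixel position under a micro-lens encodes the ray direction. Collecting the same pixel from under every micro-lens therefore yields one sub-aperture view. The sketch below shows this rearrangement under simplifying assumptions (a grayscale sensor, square micro-lenses covering exactly `a × a` pixels, no vignetting or hexagonal packing); the function name and layout are illustrative, not taken from any specific toolbox.

```python
import numpy as np

def extract_subaperture_views(lenslet_img, a):
    """Split a lenslet (plenoptic) image into its a*a sub-aperture views.

    lenslet_img: 2D array of shape (a*S, a*T), where each micro-lens
                 covers an a x a block of sensor pixels.
    Returns an array of shape (a, a, S, T): views[u, v] is the
    sub-aperture image seen through angular position (u, v).
    """
    S, T = lenslet_img.shape[0] // a, lenslet_img.shape[1] // a
    # Reshape so the angular (u, v) and spatial (s, t) axes separate,
    # then move the angular axes to the front.
    lf = lenslet_img.reshape(S, a, T, a)
    return lf.transpose(1, 3, 0, 2)

# Toy example: a 2x2 grid of micro-lenses, each covering 3x3 pixels.
img = np.arange(36).reshape(6, 6)
views = extract_subaperture_views(img, 3)
```

With this layout, `views[0, 0]` gathers the top-left pixel under every micro-lens, i.e., the view through one corner of the aperture.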

**Figure 3.** *Light field vs. 2D imaging system.*

This system records multiple views of a scene in a single shot, thus reducing the calibration and camera-synchronization issues of multi-camera setups. The micro-lens array records the direction of the incident light at different positions, i.e., it records the LF. The availability of low-cost acquisition devices enables novel applications for these imaging systems, and exploiting the redundancy of the LF in the post-processing and editing phases offers photographers and art directors new opportunities. One of the main open issues of this technology is the rendering modality. Many efforts are being devoted to the design of dedicated displays (e.g., arrays of video projectors aimed at a lenticular sheet, 3D displays, and the recently proposed tensor displays) or devices (e.g., head-mounted systems for virtual reality applications). However, these systems are still very expensive, and many challenges remain to be addressed (e.g., the reduced angular resolution of an LF cinema). The simplest and cheapest solution is to render the LF data on conventional 2D screens. Since the LF allows the scene to be rendered from several points of view and focus points, the questions of what and how to render on a 2D display arise. To address this issue, recent works have carried out an in-depth analysis of how different visualization techniques for LF images affect their perception on a 2D display [20].
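One such 2D visualization is synthetic refocusing: shifting each sub-aperture view in proportion to its angular offset and averaging brings a chosen depth plane into focus. The sketch below implements the classic shift-and-sum idea under simplifying assumptions (integer pixel shifts, periodic `np.roll` boundaries instead of proper padding); the function name and the `slope` parameter are illustrative.

```python
import numpy as np

def refocus(views, slope):
    """Synthetic refocusing by shift-and-sum.

    views: array of shape (U, V, S, T) of sub-aperture images.
    slope: pixel shift per unit angular offset; varying it moves the
           synthetic focal plane through the scene.
    """
    U, V, S, T = views.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round((u - U // 2) * slope))
            dv = int(round((v - V // 2) * slope))
            # Shift each view according to its angular offset, then accumulate.
            out += np.roll(views[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# A constant (textureless) LF refocuses to the same constant at any slope.
lf = np.ones((3, 3, 8, 8))
flat = refocus(lf, 1.0)
```

Sweeping `slope` over a range of values produces a focal stack, one of the renderings whose perceptual impact on 2D displays is studied in [20].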


**Table 2.** *Annotated light field datasets.*

### *QoE and Immersive Media: A New Challenge DOI: http://dx.doi.org/10.5772/intechopen.99973*

The research community is also trying to define quality metrics and test datasets specifically designed for LF data. In **Table 2**, a list of available LF datasets, annotated with the corresponding subjective scores, is reported.

Some efforts have been made toward the definition and assessment of quality. Methodologies for performing subjective quality assessment experiments were investigated in [20], and the impact of compression systems in [21]; the latter study was conducted by designing the SMART LF image quality dataset, consisting of source images, compressed images, and subjective scores. The impact of the compression, reconstruction, and visualization phases was studied in [22], together with the definition of the Dense Light Fields dataset. The applicability and perceptual impact of existing and specifically designed compression techniques were studied in [23]. An attempt to assess the subjective quality of experience of decoded LF images was made in [24]. A reduced-reference LF image quality metric, based on the relationship between the distortion of the estimated depth map and the LF image quality, was presented in [25]. Full-reference metrics based on multi-order derived characteristics (MDFM) [26] and on epipolar plane images (EPI) [27] have also been presented. More recently, the log-Gabor feature-based light field coherence (LGF-LFC) feature has been proposed for a full-reference metric in [3].
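As a point of reference for what these full-reference metrics improve upon, a naive baseline simply averages a 2D fidelity measure over all sub-aperture views, ignoring the angular coherence that metrics such as EPI-based ones [27] or LGF-LFC [3] are designed to capture. A minimal sketch of that baseline (mean per-view PSNR; the function name is illustrative and this is not any of the cited metrics):

```python
import numpy as np

def lf_mean_psnr(ref_views, dist_views, peak=255.0):
    """Baseline full-reference LF score: mean PSNR over sub-aperture views.

    ref_views, dist_views: arrays of shape (U, V, S, T).
    Returns the average PSNR (dB) across all U*V views.
    """
    psnrs = []
    for ref, dist in zip(ref_views.reshape(-1, *ref_views.shape[-2:]),
                         dist_views.reshape(-1, *dist_views.shape[-2:])):
        mse = np.mean((ref.astype(float) - dist.astype(float)) ** 2)
        psnrs.append(np.inf if mse == 0 else 10 * np.log10(peak ** 2 / mse))
    return float(np.mean(psnrs))

# Worst case for 8-bit data: every pixel off by the full dynamic range.
ref = np.zeros((2, 2, 4, 4))
dist = np.full((2, 2, 4, 4), 255.0)
score = lf_mean_psnr(ref, dist)
```

Because it treats views independently, such a baseline cannot penalize angular inconsistencies between views, which is precisely the gap the LF-specific metrics above address.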
