**4. Displays**

**Figure 11** shows results obtained with the RVS software on various display types—an autostereoscopic or light field screen (Courtesy ETRO-VUB, Belgium), holographic stereograms [56], and head-mounted displays. Additional videos can be found at the following links: https://youtu.be/ikJb9JaaE54 (holographic stereogram) and https://youtu.be/vavw-TcbHf4 (head-mounted display).

Displaying a dynamic scene in VR requires real-time view synthesis, preferably at 90 frames per second and at a minimum of 30 frames per second for each eye.
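These frame rates translate directly into a per-frame time budget for the synthesis pipeline. The small sketch below only converts the two target rates quoted above into milliseconds; the helper name is ours, not part of RVS:

```python
# Per-eye frame-time budgets for VR view synthesis.
def frame_budget_ms(fps: float) -> float:
    """Return the time budget in milliseconds to render one frame at `fps`."""
    return 1000.0 / fps

# Preferred and minimum rates quoted in the text:
print(f"90 fps budget: {frame_budget_ms(90):.1f} ms")  # ~11.1 ms per frame
print(f"30 fps budget: {frame_budget_ms(30):.1f} ms")  # ~33.3 ms per frame
```

At 90 fps the whole synthesis, for both eyes, must fit in roughly 11 ms, which is what drives the limits on input-image count and resolution discussed below.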

#### **Figure 11.**

*Instead of acquiring the hundreds of views needed for the different kinds of display, RVS recreates them using four input images. (a) Autostereoscopic screen, (b) holographic stereogram, (c) head-mounted display.*


#### **Table 1.**

*The frame rate for view synthesis in VR depends on the number of input images and their resolution. The output images all have the resolution of the Oculus Rift (i.e., 1080 × 1200 pixels). These results were obtained on a Windows PC with an Intel Xeon E5-2680 @ 2.7 GHz CPU and an NVIDIA GTX 1080 Ti GPU.*

However, the processing time depends on the number of input images and their resolution, since their pixels form the mesh, resulting in different frame rates [16] (see **Table 1**). Using an NVIDIA GTX 1080 Ti GPU, around four input images at full HD resolution can be processed to obtain high visual quality while reaching real-time navigation.
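Because every input pixel contributes a vertex to the warped mesh, a crude way to reason about Table 1 is to model the frame rate as inversely proportional to the total number of input pixels. The throughput constant below is a hypothetical figure chosen for illustration, not a measurement of RVS:

```python
# Back-of-the-envelope model: synthesis time grows with the total number of
# input pixels, since each input pixel becomes a vertex of the warped mesh.
def estimated_fps(num_views: int, width: int, height: int,
                  pixels_per_second: float = 7.5e8) -> float:
    """Estimate frames per second when warping `num_views` input images.

    `pixels_per_second` is a hypothetical GPU throughput constant.
    """
    total_pixels = num_views * width * height
    return pixels_per_second / total_pixels

# Four full-HD input views, as suggested in the text:
print(f"{estimated_fps(4, 1920, 1080):.1f} fps")
```

The model captures the qualitative trend of Table 1: halving the number of input views, or their resolution, roughly doubles the achievable frame rate.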

In the case of the light field head-mounted display currently under development [57], the constraint is twofold: in addition to the real-time requirement, all the light rays reaching the user's pupils must be displayed so that the eye can accommodate on close objects. That is, not just one image per eye but all the micro-parallax views around each eye position must be rendered.
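The rendering load this implies can be sketched by counting the views in a small bundle around each pupil. The grid size here is a hypothetical example; the actual count depends on the display's optics:

```python
# Illustrative count of the views a light field HMD must synthesize per frame.
def views_per_frame(grid: int, eyes: int = 2) -> int:
    """Total micro-parallax views for a `grid` x `grid` bundle around each pupil."""
    return grid * grid * eyes

# E.g., a hypothetical 4x4 bundle of micro-parallax views per eye:
print(views_per_frame(4))  # 32 views per frame instead of 2
```

Even a modest bundle multiplies the per-frame workload by an order of magnitude compared with a conventional stereo HMD, which compounds the real-time constraint discussed above.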

#### **4.1 Additional tools**

In this section, we provide references for additional tools that are not directly involved in view synthesis but are nevertheless useful for preparing a dataset.
