*3.4.1 Reducing the need for streaming by synthesizing viewpoints*

The video-plus-depth data format supports synthesizing viewpoints without ordering and streaming new data. This is known from the so-called depth image based rendering (DIBR) approaches for stereoscopic (S3D) TV [15]. Using DIBR, stereoscopic image pairs can be formed with any desired baseline orientation. Viewpoints can also be synthesized to support 3D motion parallax, i.e. small viewpoint changes around the nominal viewpoints from which the video and depth images are captured. In a telepresence solution, viewpoint synthesis can thus serve both to reduce bitrates and to avoid the potential latencies of a viewpoint-on-demand approach.
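The core DIBR operation can be sketched as follows: each pixel of a captured view is shifted horizontally by a disparity that is inversely proportional to its depth, producing a nearby virtual viewpoint. This is an illustrative simplification (not the method of [15]); the disparity parameterization, function names, and the back-to-front painting order used to resolve occlusions are assumptions for the sketch, and hole filling (inpainting of disocclusions) is left out.

```python
import numpy as np

def dibr_shift(image, depth, baseline_px, z_near, z_far):
    """Warp a video-plus-depth frame to a nearby virtual viewpoint by
    shifting each pixel horizontally in proportion to its disparity.

    image:       (H, W) or (H, W, C) array, the captured view
    depth:       (H, W) array of metric per-pixel depths
    baseline_px: virtual camera offset, expressed as the disparity (in
                 pixels) of a point at the near plane (assumed scaling)
    Returns the warped image and a mask of filled pixels; unfilled
    pixels are disocclusion holes to be inpainted afterwards.
    """
    h, w = depth.shape
    # Pinhole model: disparity is proportional to 1/depth, normalized
    # here so that depth == z_near gives baseline_px and z_far gives 0.
    disparity = baseline_px * (1.0 / depth - 1.0 / z_far) / (1.0 / z_near - 1.0 / z_far)
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    # Paint farthest pixels first so nearer surfaces overwrite them
    # (correct occlusion handling without an explicit z-buffer).
    order = np.argsort(-depth, axis=None)
    ys, xs = np.unravel_index(order, depth.shape)
    new_xs = np.clip(xs + np.round(disparity[ys, xs]).astype(int), 0, w - 1)
    out[ys, new_xs] = image[ys, xs]
    filled[ys, new_xs] = True
    return out, filled
```

A near object thus slides across the background between the two views, leaving a hole at its original position — exactly the disocclusion artifact that DIBR pipelines must conceal.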

One applicable method for synthesizing new viewpoints is virtual viewpoint generation in 3D (3D geometry calculations), which is also well supported by graphics processors for speeding up the computations. Another approach is used in [16, 17], where new viewpoints are formed by simple shifts of multifocal planes (MFPs), likewise generated from video-plus-depth data. The latter approach is attractive at least when a graphics processor is not available, or when natural accommodation is supported by an MFP approach. Note that MFPs can also be used for virtual viewpoint generation for normal stereoscopic pairs without aiming to support accommodation [16, 17].

<sup>1</sup> https://www.qualcomm.com/media/documents/files/augmented-and-virtual-reality-the-first-waveof-5g-killer-apps.pdf
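The MFP-shift idea can be illustrated with a minimal sketch, not the exact method of [16, 17]: the frame is first sliced into a few focal planes by per-pixel depth (simple box filtering is assumed here; linear depth blending is also common), and a new viewpoint is then formed by shifting each plane *as a whole* by its single disparity and summing — no per-pixel warping and no GPU required. All names and the disparity scaling are assumptions for the sketch.

```python
import numpy as np

def decompose_to_mfps(image, depth, plane_depths):
    """Slice a video-plus-depth frame into multifocal planes (MFPs):
    each pixel is assigned to the focal plane nearest to its depth."""
    idx = np.argmin(np.abs(depth[..., None] - np.asarray(plane_depths)), axis=-1)
    return [np.where(idx == k, image, 0.0) for k in range(len(plane_depths))]

def hshift(plane, d):
    """Shift all columns by d pixels, zero-filling the vacated columns."""
    out = np.zeros_like(plane)
    w = plane.shape[1]
    if d >= 0:
        out[:, d:] = plane[:, :w - d]
    else:
        out[:, :w + d] = plane[:, -d:]
    return out

def synthesize_view(planes, plane_depths, baseline_px, z_near, z_far):
    """Form a nearby virtual viewpoint by shifting each MFP by one
    per-plane disparity and compositing by summation."""
    out = np.zeros_like(planes[0])
    for plane, z in zip(planes, plane_depths):
        # Same disparity model as in DIBR, but evaluated once per plane.
        d = int(round(baseline_px * (1.0 / z - 1.0 / z_far) / (1.0 / z_near - 1.0 / z_far)))
        out += hshift(plane, d)
    return out
```

Because only one shift per plane is needed, the cost scales with the (small) number of planes rather than with per-pixel geometry, which is why this variant suits devices without a graphics processor; the same shifted plane stack can directly feed an MFP display supporting accommodation.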
