form of a real-valued 2D matrix of recorded intensity according to Eq. (1). In this branch, different techniques have emerged over the two decades of existence of digital holography, such as (i) digital holographic microscopy with plane-wave or point-source illumination [10, 11]; (ii) optical diffraction tomography with multi-directional phase-shifting holographic capture [12]; (iii) infrared holography in the long-wavelength region for capture of large objects [13]; (iv) determination of sizes and locations and tracking of particles in a 3D volume [14]. Feature recognition based on digital holography has also been proposed [15]. A lot of effort has been dedicated to instrumental and software solutions of the twin-image problem [16].

The branch dedicated to synthesis of a light field comprises methods for computer generation of holograms [17], which are fed to some kind of SLM for optical reconstruction of the images they encode. Computer-generated holograms (CGHs) are used for holographic displays [18], holographic projection [19] and diffractive optical elements [20]. In principle, CGHs provide the only means to generate light fields for virtual 3D objects. A CGH is a real-valued 2D matrix of amplitude or phase data; it may also have a binary representation.

Both branches are closely related to the task of direct transfer of optically captured digital holograms to a holographic display. To realize the chain of 3D capture, data transfer and holographic display, digital holography requires coherent light as well as 2D optical sensors and SLMs with high resolution and large apertures. 4D imaging, in which the time coordinate is added to the data, further aggravates the task of building a holographic display, because the latter needs much higher resolution and much more information capacity than other types of 3D displays. Generally speaking, there are two ways of generating 3D content for holographic displays: (i) conversion of optically captured holographic data; (ii) computer generation of holograms. In what follows, we discuss these two main tendencies, direct feeding of optically recorded digital holograms to a holographic display and computer generation of interference fringes from directional, depth and colour information about the 3D objects, on the basis of our experience in forming the holographic content.

2. 3D content generation from holographic data

The ultimate goal of digital holography is to build a system for 3D scene capture, transmission of the captured data and 3D optical display. Although based on the clear theoretical grounds given by Eq. (1), this task is hard to fulfil because of limitations encountered in digital implementation of the holographic principle due to the discrete nature of photo-sensors and display devices, their small size and low spatial resolution. Modern devices are characterized by pixel periods from 1 to 20 μm and active areas from 1 up to 2–3 cm². 3D content generation from optically captured digital holograms should include three steps: (i) multi-view capture by a set of cameras or by sequential recording from different perspectives (Figure 3); (ii) conversion of the captured data to a display data format; (iii) feeding the data to a display built from many SLMs to enlarge the viewing angle (Figure 3).

A key problem of digital holographic capture and imaging is the very small value of the maximum angle between the object and reference beams which satisfies the sampling requirement for the spatial frequency at the current low spatial resolution of electrically addressable devices.

186 Holographic Materials and Optical Systems


Figure 3. Schematic diagram of multi-view holographic capture by many digital photo-sensors and multi-view optical reconstruction after mapping the 3D contents to a display set of SLMs.

In theory, the photo-sensor must resolve the fringe pattern formed by interference of the reference wave with the waves scattered from all object points. The holographic display, in turn, should support a certain space-bandwidth product with regard to the limitations of the human visual system. The maximum angle, θmax, between the reference and object beams that satisfies the Whittaker-Shannon sampling requirement for a wavelength λc,d and a pixel period Δc,d, where the subscripts 'c' and 'd' refer to the capture and display devices, respectively, is found from

$$\sin\left(\frac{\theta_{\max}}{2}\right) = \frac{\lambda_{c,d}}{4\Delta_{c,d}}\tag{2}$$

The limitation set by Eq. (2) implies capture of small objects at a large distance from the camera and a small viewing angle at optical reconstruction. If the object lateral size D is much greater than the sensor size, the minimum distance between the object and the photo-sensor is about zmin = DΔc/λc. Usage of coherent light seriously restricts the viewing angle of the holographic display and the size of the reconstructed image [21]. A planar configuration of many SLMs allows for visualization of larger objects [22], but the problem with the small viewing angle remains. Enlarging the viewing angle for pixelated SLMs by using higher diffraction orders and spatial filtering is proposed in Ref. [23]. Under coherent illumination, a circular arrangement of SLMs puts less severe requirements on the space-bandwidth product of the display and supports full-parallax binocular vision at an increased viewing angle. Different circular configurations have been proposed recently [24–26].
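As a quick numerical illustration of Eq. (2) and the minimum-distance estimate zmin = DΔc/λc, the sketch below evaluates θmax and zmin. The wavelength, pixel period and object size are illustrative assumptions for a typical visible-light sensor, not values taken from the text:

```python
import math

# Illustrative assumptions: visible-light capture at 0.532 um with a
# 2.2 um pixel period, and an object of lateral size D = 10 cm.
lam_c   = 0.532e-6   # capture wavelength [m]
delta_c = 2.2e-6     # pixel period of the photo-sensor [m]
D       = 0.10       # object lateral size [m]

# Eq. (2): sin(theta_max/2) = lambda/(4*delta)
theta_max = 2.0 * math.asin(lam_c / (4.0 * delta_c))

# Minimum object-to-sensor distance: z_min = D*delta_c/lambda_c
z_min = D * delta_c / lam_c

print(f"theta_max = {math.degrees(theta_max):.1f} deg")  # ~6.9 deg
print(f"z_min = {z_min:.2f} m")                          # ~0.41 m
```

Even a 10 cm object must thus be captured from almost half a metre away with an angular aperture below 7°, which illustrates why direct holographic capture of large scenes is difficult.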

Effective operation of the holographic imaging system requires maintaining a consistent flow of data through the capture, transmission and display blocks. Thus, another problem of 3D content generation is the non-trivial mapping of the data from 3D holographic capture with non-overlapping camera apertures to an arbitrary configuration of display devices (Figure 3). In the general case, the wavelength, the pixel period and the pixel number differ between the capture and display sides, i.e. λc ≠ λd, Δc ≠ Δd, Nc ≠ Nd. This alters the reconstruction distance and the lateral and longitudinal dimensions of the reconstructed volume [27]. Another difficulty arises from the requirement that the set of digital holograms captured for multiple views of the 3D object be consistent with the display configuration built from many SLMs. Although both the amplitude and the phase can be retrieved from the captured holograms, the type of the SLM entails encoding the holographic data only as amplitude or only as phase information. To illustrate the non-triviality of 3D content generation from optically recorded digital holograms, we consider two characteristic examples from our experience with data mapping. A detailed description of the capture and display systems is given in Refs. [25, 28]. Here, we focus only on data transfer from the holograms to the SLMs.

In the first example, the capture parameters substantially differed from the parameters on the display side. The mapping was done for a circular holographic display under visible light illumination, where the input data were extracted from a set of holograms recorded at 10.6 μm [29]. The interest in capturing holograms in the long-wavelength infrared region is due to the shorter recording distance, larger viewing angle and less stringent requirements on the stability of the system. The object was a bronze reproduction of the Benvenuto Cellini Perseus sculpture with a height of 33 cm [28], a large object for digital holography. Nine off-axis digital holograms were captured by rotating the object with an angular step of 3° using an a-Si (amorphous silicon) thermal camera with Nc = nxc · nyc = 640 · 480 pixels and Δc = 25 μm. The object beam interfered with a spherical reference wave given in the paraxial approximation in the plane of the photo-sensor as Rc(xc, yc; rc) = exp[−jπ(xc² + yc²)/(λcrc)]; the radius rc = zo/2 was equal to half of the distance zo = 0.88 m between the object and the photo-sensor. The nine phase-only SLMs in the display set-up were characterized by Nxd · Nyd = 1920 · 1080 pixels, a pixel period Δd = 8 μm and phase modulation from 0 to 2π; the illuminating wavelength was λd = 0.532 μm. The SLMs, arranged in a circular configuration, were illuminated with a single astigmatic expanding wave by means of a cone mirror whose apex was at a distance Ds from the point light source positioned on the line of the cone mirror axis [25]:

$$W(x_d, y_d) = \exp\left(-j\,\frac{2\pi}{\lambda_d}\,\frac{x_d^2}{D_h}\right)\exp\left[-j\,\frac{\pi}{\lambda_d}\,\frac{\left(y_d + h_{\mathrm{SLM}}/2\right)^2}{D_h + D_s}\right]\tag{3}$$

where Dh is the distance from the cone mirror axis to the SLM centres and hSLM is the SLM height. The reconstructed images were combined above the cone mirror, at a distance of 35 cm from each SLM, by a slight tilt of the SLMs. A linear stretching of the images with a coefficient m = Δd/Δc occurs. A reference wave at a different wavelength, λd, and a different radius, rrec, yields a new reconstruction distance zi [30]:

$$\frac{1}{z_i} = \frac{1}{r_{rec}} \pm \frac{\mu}{m^2}\left(\frac{1}{z_o} - \frac{1}{r_c}\right)\tag{4}$$

and the reconstructed image undergoes longitudinal and lateral magnifications:

$$M_{\mathrm{long}} = \frac{dz_i}{dz_o} = \left(\frac{z_i}{z_o}\right)^2\frac{\mu}{m^2}, \quad M_{\mathrm{lat}} = \frac{\mu}{m}\,\frac{z_i}{z_o}\tag{5}$$

where μ = λd/λc (the convention that reproduces the magnifications quoted below). Generation of 3D content for each SLM was based on Eq. (1) and included: (i) retrieval of the phase ϕO(x, y) of the object field from the captured holograms; (ii) compensation for the non-plane wave illumination of the SLMs; (iii) adjustment of the reconstruction position to the mandatory distance of 35 cm. The phase ϕO(x, y) was retrieved by filtering in the spatial frequency domain, to extract only the real image in Eq. (1) and to suppress the zero-order term, and by multiplying the filter output with the numerical reference wave Rc*(xd, yd; rrec) taken with a new radius rrec = zi/2, where zi = zom²/μ, in the display coordinates xd = lΔd, yd = nΔd, l = 1, 2, …, Nxd, n = 1, 2, …, Nyd. The amplitude in the object field was discarded, so that it became H(xd, yd) = exp[jϕO(xd, yd)]. To compensate for the non-symmetrical illumination, the phase of H(xd, yd)W*(xd, yd) was fed to each SLM. The holograms were placed at the centres of the SLMs as depicted in Figure 4. Shining W(xd, yd) on the SLMs with HW* represented as a phase creates a reconstruction corresponding to plane-wave illumination, i.e. rrec → ∞ in Eq. (4). The reconstruction distance in this case is zi = 1.78 m. The reconstruction is stretched longitudinally and squeezed laterally, with Mlong = 2.04 and Mlat = 0.32. A digital converging lens, L1(xd, yd) = exp{j(2π/λd)(xd² + yd²)/ρ1}, with a focal distance of ρ1 = 43.5 cm, was introduced to adjust the reconstruction distance to 35 cm. The image was separated from the strong non-diffracted beam caused by the pixelated nature of the SLMs by multiplying the array with the holographic data by a tilted plane wave, P(yd) = exp(j2πyd sin θt/λd), where θt = 2°.
The phase of W*(xd, yd) was attached to the pixels outside the hologram, together with the phase of the lens L2(xd, yd) = exp{j(2π/λd)(xd² + yd²)/ρ2} with ρ2 = 35 cm, to gather the light reflected from these pixels below the reconstructed image. The arrangement of the wave fields on the surface of each SLM is depicted in Figure 4 (actually, the phases of these fields were fed to each SLM). The processing allowed for combining the images created by all SLMs into a single reconstruction which could be viewed smoothly within an increased viewing angle of 24°. A video of the reconstruction can be found in Ref. [29]. The most remarkable fact of this data mapping from far-infrared capture to a circular display was that we achieved nearly equal longitudinal and lateral magnifications, Mlong = 0.078 and Mlat = 0.062, of the reconstruction volume.
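The scaling relations of Eqs. (4) and (5) can be checked numerically against the quoted values. Note that zi = 1.78 m, Mlong = 2.04 and Mlat = 0.32 are reproduced when the wavelength ratio is taken as μ = λd/λc; a minimal sketch:

```python
# Numerical check of Eqs. (4) and (5) for the first example
# (far-infrared capture, visible-light display).
lam_c, lam_d = 10.6e-6, 0.532e-6   # capture / display wavelengths [m]
d_c, d_d     = 25e-6, 8e-6         # capture / display pixel periods [m]
z_o          = 0.88                # object-to-sensor distance [m]
r_c          = z_o / 2             # radius of the spherical reference [m]

m  = d_d / d_c        # lateral stretching coefficient, m = 0.32
mu = lam_d / lam_c    # wavelength ratio reproducing the quoted numbers

# Eq. (4) in the plane-wave reading (r_rec -> infinity), with the sign
# chosen to give a positive reconstruction distance:
z_i = 1.0 / ((mu / m**2) * (1.0 / r_c - 1.0 / z_o))

# Eq. (5): longitudinal and lateral magnifications
M_long = (z_i / z_o) ** 2 * mu / m**2
M_lat  = (mu / m) * (z_i / z_o)

# With the digital lens bringing the image to 35 cm:
M_long_35 = (0.35 / z_o) ** 2 * mu / m**2   # ~0.078
M_lat_35  = (mu / m) * (0.35 / z_o)         # ~0.062

print(f"z_i = {z_i:.2f} m, M_long = {M_long:.2f}, M_lat = {M_lat:.2f}")
# z_i = 1.80 m, M_long = 2.04, M_lat = 0.32
```

The check also reproduces the near-equal magnifications 0.078 and 0.062 obtained after adjusting the reconstruction distance to 35 cm.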


The second example of data mapping is related to visualization of transparent objects by a holographic display with phase encoding of the input data. The object beam O(x, y) in this case was provided by simulation of a noiseless diffraction tomography experiment in which transmission

Figure 4. Schematic diagram of the chain holographic capture—data transfer—holographic display for far-infrared capture of a large object [29] and visible light visualization [25].
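The per-SLM processing of the first example (retrieved phase, illumination compensation, digital lens L1 and tilt P) can be sketched as follows. The flat stand-in for the retrieved object phase and the plane-wave stand-in for the illumination W are illustrative assumptions; only the lens, tilt and phase-extraction steps follow the text:

```python
import numpy as np

# Display-side parameters from the first example
lam_d = 0.532e-6            # display wavelength [m]
dx    = 8e-6                # SLM pixel period [m]
Nx, Ny = 1920, 1080         # SLM pixel counts
rho1   = 0.435              # focal distance of the digital lens L1 [m]
theta_t = np.deg2rad(2.0)   # tilt angle separating image from zero order

x = (np.arange(Nx) - Nx // 2) * dx
y = (np.arange(Ny) - Ny // 2) * dx
X, Y = np.meshgrid(x, y)    # (Ny, Nx) coordinate grids

phi_O = np.zeros((Ny, Nx))          # stand-in for the retrieved phase
H = np.exp(1j * phi_O)              # amplitude discarded, |H| = 1
W_conj = np.ones((Ny, Nx))          # stand-in for conj(W) of Eq. (3)
L1 = np.exp(1j * (2*np.pi/lam_d) * (X**2 + Y**2) / rho1)  # digital lens
P  = np.exp(1j * 2*np.pi * Y * np.sin(theta_t) / lam_d)   # tilted wave

slm_phase = np.angle(H * W_conj * L1 * P)   # real phase map fed to the SLM
print(slm_phase.shape)  # (1080, 1920)
```

In the actual set-up, W_conj would be the conjugate of the astigmatic wave of Eq. (3) and phi_O the phase retrieved from the captured holograms; the final array is the real-valued phase pattern addressed to the phase-only SLM.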

Figure 5. Numerical reconstructions of a virtual transparent object given in the left section of the figure as a 3D distribution of the refractive index nO (green, nO = 1.001; red, nO = 1.002; yellow, nO = 1.003; blue, nO = 1.004) [31].

holograms of a weakly refracting transparent object with a size of 25 μm were recorded by a phase-shifting technique [31]. The object had a refractive index varying from 1 to 1.004, but due to its small size it gave rise to strong diffraction. The capture parameters were λc = 0.68 μm, Δc = 2.4043 μm, Nc = 200 · 200 and zo = 68 μm [31]; the display parameters were as above. Direct optical reconstruction from the captured phase-only data failed. Usage of the full complex amplitude O(x, y) provided a numerical reconstruction as a concise 3D shape closely resembling the 3D refractive index distribution within the object (Figure 5). Omission of the amplitude, aO(x, y), entirely destroyed this 3D shape. The observed severe distortions showed the necessity of phase modification in the hologram plane. This was done iteratively by applying the Gerchberg-Saxton algorithm with the known correct complex amplitude at the reconstruction plane. The quality of the numerical reconstruction from the modified phase was satisfactory, and thus the problem with optical reconstruction was solved. Introduction of a digital magnifying lens at the SLM plane enlarged the reconstructed object to 6 mm [31], which gives about 240 times magnification in comparison with its original size.
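The iterative phase modification described above can be sketched with a generic Gerchberg-Saxton loop. For brevity, this sketch propagates with a plain FFT instead of the Fresnel transform of the actual set-up, and the Gaussian target amplitude is an illustrative stand-in for the known correct amplitude at the reconstruction plane:

```python
import numpy as np

def gs_phase_only(target_amp, n_iter=60, seed=0):
    """Find a phase-only hologram whose (FFT-modelled) reconstruction
    approximates the target amplitude, in the spirit of Gerchberg-Saxton."""
    rng = np.random.default_rng(seed)
    # start from the target with a random phase to avoid stagnation
    field = target_amp * np.exp(1j * rng.uniform(0, 2*np.pi, target_amp.shape))
    for _ in range(n_iter):
        holo = np.fft.ifft2(field)                 # back to hologram plane
        holo = np.exp(1j * np.angle(holo))         # phase-only constraint
        field = np.fft.fft2(holo)                  # forward propagation
        field = target_amp * np.exp(1j * np.angle(field))  # amplitude constraint
    return np.angle(holo)

# Illustrative target: a smooth Gaussian blob on a 64 x 64 grid
n = 64
yy, xx = np.mgrid[0:n, 0:n]
target = np.exp(-((xx - n/2)**2 + (yy - n/2)**2) / (2 * 8.0**2))

phase = gs_phase_only(target)
recon = np.abs(np.fft.fft2(np.exp(1j * phase)))
# normalized correlation between reconstruction and target amplitude
corr = (recon * target).sum() / (np.linalg.norm(recon) * np.linalg.norm(target))
print(round(float(corr), 2))
```

Keeping the amplitude constraint at the reconstruction plane while enforcing unit amplitude at the hologram plane is what allows a phase-only SLM to approximate the full complex field, mirroring the role of the algorithm in the second example.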
