3. Computer generation of 3D contents for holographic imaging

#### 3.1. Methods for computer generation of holographic fringe patterns

A CGH is a fringe pattern that diffracts light into a wavefront with desired amplitude and phase distributions, and it seems to be the most appropriate choice for 3D content generation. This wavefront can be created both for real and for virtual objects. The goal in developing the CGH input data for a 3D holographic display is real-time generation of large-scale, wide-viewing-angle, full-parallax colour holograms which provide photorealistic reconstruction that can be viewed with both eyes. These CGHs must support motion parallax and the coupled occlusion effect, expressed in the change of the visible surface according to the viewer position. For this purpose, a CGH must have a very large number of samples displayed on a device with high spatial resolution. Thus the most important requirements for CGH synthesis are computational efficiency and holographic image quality. The CGH computation involves digital representation of the 3D object, which includes not only its geometrical shape but also texture and lighting conditions, simulation of light propagation from the object to the CGH plane, and encoding of the fringe pattern formed by the interference of the object wavefront with a reference beam in the display data format.

There are two basic frameworks for CGH generation depending on the mathematical models of the 3D target objects: (i) point cloud algorithms and (ii) polygon-based algorithms. In the point cloud method [32], the 3D object is a collection of P self-luminous point sources. The method traces the ray from a source $p$ with spatial coordinates $(x_p, y_p, z_p)$ to the point $(\xi, \eta)$ on the hologram plane at $z = 0$ and is therefore sometimes referred to as a ray-tracing approach; the distance between both points is $r_p = \left[(\xi - x_p)^2 + (\eta - y_p)^2 + z_p^2\right]^{1/2}$ (Figure 6). Each point source emits a spherical wave with an amplitude, $a_p$, and an initial phase, $\varphi_p$. The amplitude and phase distributions in the point cloud can be controlled individually. The fringe patterns for all object points are added up at the hologram plane to obtain the CGH. The method is highly flexible due to its ability to represent surfaces with arbitrary shapes and textures, but it is very time consuming.
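As an illustration of the point cloud (ray-tracing) framework, the superposition of spherical waves at the hologram plane can be sketched in a few lines. This is a minimal sketch, not the implementation of Ref. [32]: all function and parameter names are our own, and a unit-amplitude on-axis plane reference wave is assumed for the interference step.

```python
import numpy as np

def point_cloud_cgh(points, amplitudes, phases, wavelength,
                    n_pix=256, pitch=8e-6):
    """Superpose spherical waves from P point sources on the hologram
    plane at z = 0 (illustrative sketch; names are our own)."""
    # Hologram-plane sample coordinates (xi, eta), centred on the axis
    coords = (np.arange(n_pix) - n_pix / 2) * pitch
    xi, eta = np.meshgrid(coords, coords)
    k = 2 * np.pi / wavelength
    field = np.zeros((n_pix, n_pix), dtype=complex)
    for (xp, yp, zp), ap, phip in zip(points, amplitudes, phases):
        # r_p = [(xi - x_p)^2 + (eta - y_p)^2 + z_p^2]^(1/2)
        rp = np.sqrt((xi - xp) ** 2 + (eta - yp) ** 2 + zp ** 2)
        # Spherical wave with 1/r amplitude decay and initial phase phi_p
        field += (ap / rp) * np.exp(1j * (k * rp + phip))
    # Interference with a unit on-axis plane reference beam gives fringes
    return np.abs(field + 1.0) ** 2
```

The per-point loop makes the cost O(P N^2) for an N x N hologram, which is why the text calls the method very time consuming for large P.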

Polygon-based representation is a wave-oriented approach [33–35]. The object is a collection of P planar segments of polygonal shape (Figure 7). Each polygon is a tilted surface source of a light field calculated by propagation of its angular spectrum of plane-wave decomposition [27] using a fast Fourier transform (FFT). An angular-dependent rotational transformation in the spectral domain is applied to find the spectrum in a plane parallel to the hologram in the global coordinate system from the spectrum in the tilted plane of the local coordinate system $(x_p, y_p, z_p)$, $p = 1, 2, \ldots, P$ for each polygon [36, 37]. The z-axis of the local coordinate system is along the normal vector to the polygon surface. The object field is found after an FFT of the final angular spectrum, which is a sum of the transformed angular spectra of the polygon fields in the global

Figure 6. CGH synthesis from a point cloud model.

holograms of a weakly refracting transparent object with a size of 25 μm were recorded by a phase-shifting technique [31]. The object had a refractive index variation from 1 to 1.004, but due to its small size it gave rise to strong diffraction. The capture parameters were λc = 0.68 μm, Δc = 2.4043 μm, Nc = 200 × 200, zo = 68 μm [31]; the display parameters were as above. Direct optical reconstruction from the captured phase-only data failed. Usage of the full complex amplitude O(x, y) provided numerical reconstruction as a concise 3D shape resembling closely the 3D refractive index distribution within the object (Figure 5). Omission of the amplitude, aO(x, y), destroyed this 3D shape entirely. The observed severe distortions showed the necessity for phase modification in the hologram plane. This was done iteratively by applying the Gerchberg-Saxton algorithm with the known correct complex amplitude at the reconstruction plane. The quality of the numerical reconstruction from the modified phase was satisfactory. Thus the problem with optical reconstruction was solved. Introduction of a digital magnifying lens at the SLM plane enlarged the reconstructed object to 6 mm [31]. This gives about 240 times magnification in comparison with its original size.

Figure 5. Numerical reconstructions of a virtual transparent object given in the left section of the figure as a 3D distribution of the refractive index nO (green, nO = 1.001; red, nO = 1.002; yellow, nO = 1.003; blue, nO = 1.004 [31]).

190 Holographic Materials and Optical Systems

Figure 7. CGH synthesis from a polygon-based model.

coordinate system. Computation of a polygon field is slower than that of a spherical wave emitted by a point light source, but the number of polygons is much smaller than the number of point sources, and the total computation time is shorter compared to the point cloud approach. The traditional polygon-based method evolved into an analytical implementation in which the angular spectrum of a triangle of arbitrary size, shape, orientation and location in space is analytically calculated from the known spectrum of a reference triangle [38–40]. The analytical method eliminates the need to apply an FFT for each polygon.
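The core operation of the polygon-based framework, propagation of an angular spectrum of plane waves between parallel planes, can be sketched as follows. This is a minimal parallel-plane sketch under our own naming; the per-polygon tilted-plane rotational transformation of Refs. [36, 37] is omitted.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a sampled complex field between two parallel planes
    using the angular spectrum of plane waves (illustrative sketch)."""
    n = field.shape[0]
    f = np.fft.fftfreq(n, d=pitch)              # spatial frequencies
    fx, fy = np.meshgrid(f, f)
    # Longitudinal spatial frequency; evanescent components (arg < 0)
    # are suppressed rather than propagated
    arg = (1.0 / wavelength) ** 2 - fx ** 2 - fy ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)
    # Decompose into plane waves, apply the transfer function, recompose
    return np.fft.ifft2(np.fft.fft2(field) * transfer)
```

Because the transfer function has unit modulus for propagating components, the energy of a band-limited field is preserved, which makes the routine easy to sanity-check.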

The CGH synthesis of real objects can be carried out by 3D capture based on holographic means or structured light methods under coherent or incoherent illumination [41]. The output from, e.g. profilometric/tomographic reconstruction can be converted into a point cloud which allows for CGH synthesis. The substantial advantage is the option to adapt the captured data to any holographic display. Incoherent capture of multiple projection images to generate holographic data has many advantages, such as incoherent illumination, no need for interferometric equipment, and display of large objects. The concept was advanced 40 years ago in Ref. [42] by generating a holographic stereogram (HS). The high quality of large-format HSs as a ray-based display, especially those printed by HS printers, is well known [43]. The input data for HS imaging are composed from colour and directional information. This causes a decrease of resolution for deep scenes and blurring. Introduction of a ray-sampling plane close to the object and computation of the light wavefront from this plane to the hologram is proposed in Ref. [44]. Synthesis of a full-parallax colour CGH from multiple 2D projection images captured by a camera scanned along a 2D grid is proposed in Refs. [45, 46]. The approach given in Ref. [46] relies on calculation of the 3D Fourier spectrum and was further improved [47] by developing a parabolic sampling of the spectrum for data extraction, which requires only 1D camera scanning. Methods in which directional information from projection images is combined with a depth map are under development [48–50].

Over the last decade, the efforts have been focused on improving image quality by different rendering techniques and on accelerating the CGH computation. The holographic data (amplitude and phase) should encode occlusions, materials and roughness of the object surface, reflections, surface glossiness and transparency. It is difficult to create an occlusion effect using a 3D object representation as a collection of primitives (points or polygons) due to the independent contribution of all primitives to the light field. To decrease the computational cost of occlusion synthesis, a silhouette mask approach has been proposed in the polygon-based computation [35]. The mask produced by the orthographic projection of the foreground objects blocks the wavefront of light coming from background objects. The method allows for synthesis of a very large CGH [35] but is prone to errors at oblique incidence. Computation is accelerated if occlusion is included in a light-ray rendering process from multiple 2D projection images during the synthesis of a CGH as an HS [51]. As the method suffers from a decrease of angular resolution in deep scenes, the accuracy is improved by processing occlusion in the light-ray domain along with sampling the angular information from the projection images. This is done in a virtual ray-sampling plane [52, 53]. The sampled data are then converted by a Fourier transform to the complex amplitude of the object beam. Considering occlusion as geometric shadowing [54], effective CGH synthesis can be carried out by casting, from each sample at the hologram plane, a bundle of rays at uniform angular separation within the diffraction angle given in Eq. (2). Such approaches are described in Refs. [54–56], either representing the 3D objects as composed of planar segments parallel to the hologram plane [55] or performing a lateral shear of the 3D scene so that the z-buffer of the graphics processing unit (GPU) can be used for rays with the same direction to accelerate computation [54].
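The silhouette-mask idea can be reduced to a single masking step. The sketch below is purely illustrative and not the method of Ref. [35]: it assumes the background field has already been propagated to the mask plane, and the binary mask stands in for the orthographic projection of the foreground object.

```python
import numpy as np

def silhouette_occlusion(background_field, foreground_field, mask):
    """Silhouette-mask occlusion sketch: 'mask' is 1 inside the
    foreground silhouette and 0 elsewhere. The background wavefront
    is zeroed behind the silhouette before the foreground light is
    added; the sum would then be propagated on to the hologram."""
    return background_field * (1.0 - mask) + foreground_field
```

In the full polygon-based pipeline this masking is interleaved with propagation between depth layers, which is where the oblique-incidence errors mentioned above arise.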
Occlusion, texture and illumination issues can be handled by computer graphics techniques. Their effective use is possible when the ray casting is applied by spatially dividing the CGH into a set of sub-holograms and building different sets of points or polygons for them [57–59].


Under specular reflection, the viewer is able to see only part of the object, while diffuse reflection sends light rays in all directions. Both types of reflection must be encoded in a CGH by adopting different reflection models to represent the texture of the objects [60–62]. In the CGH synthesis, the luminance is encoded in the amplitude, while the reflectance is incorporated as a phase term. The task of representing reflection becomes rather complicated under non-plane-wave illumination or in the case of background illumination [60]. A perfect diffuse reflection is achieved by adding a uniformly distributed random phase. Unfortunately, this causes speckle noise at reconstruction [63]. A variety of methods have been proposed for fast synthesis of CGHs, such as look-up table methods with pre-computed fringes [63, 64], recurrence relation methods that avoid directly calculating the optical path [65], introduction of a wavefront recording plane [66], HS methods and many others. Hardware solutions such as special-purpose computers like 'Holographic ReconstructioN (HORN)' [67] or GPU computing [32, 68–70] are very effective for fast calculation because the pixels of a CGH can be calculated independently.
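The look-up table idea of pre-computed fringes [63, 64] can be sketched as follows: one fringe patch per quantized depth is computed once, and each object point then reuses a laterally shifted copy instead of recomputing its optical path. The names, the depth quantization and the circular-shift reuse are our own simplifications, not the exact scheme of those references.

```python
import numpy as np

def build_fringe_table(depths, wavelength, n_pix, pitch):
    """Pre-compute one on-axis spherical-wave fringe patch per
    quantized depth (look-up-table sketch)."""
    coords = (np.arange(n_pix) - n_pix / 2) * pitch
    xi, eta = np.meshgrid(coords, coords)
    k = 2 * np.pi / wavelength
    table = {}
    for z in depths:
        r = np.sqrt(xi ** 2 + eta ** 2 + z ** 2)
        table[z] = np.exp(1j * k * r) / r
    return table

def cgh_from_table(points, table):
    """Accumulate table fringes shifted to each point's lateral pixel
    offset instead of recomputing the optical path per point."""
    n = next(iter(table.values())).shape[0]
    field = np.zeros((n, n), dtype=complex)
    for (ix, iy, z) in points:   # pixel offsets and quantized depth
        field += np.roll(table[z], (iy, ix), axis=(0, 1))
    return field
```

The speed-up comes from replacing P evaluations of the square root and exponential over the full grid by P array shifts and additions.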

#### 3.2. Phase-added holographic stereogram as a fast computation approach

Effective acceleration of the computation is achieved in coherent stereogram (CS) algorithms, in which the CGH is partitioned into segments and the directional data for each segment are sampled (Figure 8). A similar idea was advanced in the diffraction-specific fringe computation by Lucente [71], which partitions the hologram into holographic elements called hogels, each hogel being a linear superposition of weighted basic fringes corresponding to the points of a point cloud. Each segment in the CS emits a bundle of light rays that form a wavefront as a set of patches of mismatched plane waves due to the lack of depth information. This drawback was overcome by adding a distance-related phase [72]. The phase-added stereogram (PAS) is

Figure 8. Synthesis of a CGH as a coherent stereogram with partitioning the hologram plane into square segments and sampling the directional information.

computationally effective if implemented by FFT. To clarify this point, we depict schematically the PAS computation with FFT in Figure 9.

In the CS and PAS algorithms, the hologram is partitioned into M × N equal square segments of S × S pixels, and the object is described by a point cloud with P points. The segment size, $S\Delta_d \times S\Delta_d$, where $\Delta_d$ is the pixel period at the hologram plane, is chosen small enough to approximate the spherical wave from a point by a plane wave, i.e. by a 2D complex harmonic function, within the segment. Under this approximation, the contribution of a point source is constant across the segment and is determined only with respect to its central pixel. In this way, the input data and the computation time are substantially reduced. For the segment $(m, n)$, $m = 1 \ldots M$, $n = 1 \ldots N$, the contribution of the point $p$ comprises the spatial frequencies $(u_{mn}^p, v_{mn}^p)$ of the plane wave at the wavelength $\lambda_d$, the distance $r_{mn}^p$ between the point $p$ and the central point of the segment, and the initial phase $\Phi_{mn}^p$ of the sinusoid. The spatial frequencies are determined by the angles $(\Theta_{mn}^p, \Omega_{mn}^p)$ of the ray coming from the point $p$ to the central point of the segment $(m, n)$ and by the angles $\theta_{R\xi}$ and $\theta_{R\eta}$ of the plane reference wave with respect to the $\xi$ and $\eta$ axes at the hologram plane:

$$u_{mn}^p = \frac{\sin\Theta_{mn}^p - \sin\theta_{R\xi}}{\lambda_d}, \qquad v_{mn}^p = \frac{\sin\Omega_{mn}^p - \sin\theta_{R\eta}}{\lambda_d}$$

The phases $\Phi_{mn}^p$, $m = 1 \ldots M$, $n = 1 \ldots N$, ensure matching of the wavefronts of the plane waves diffracted from all segments and may contain the initial phase $\varphi_p$ as well as the distance-related phase $2\pi r_{mn}^p / \lambda_d$. For all object points, the fringe pattern across a segment is approximated as a superposition of 2D complex sinusoids. This pattern is computed by placing the amplitudes of the sinusoids at the corresponding frequency locations in the spatial frequency domain and applying an inverse Fourier transform to the spectrum.
FFT implementation is the second step for acceleration of CGH computation (Figure 9). The FFT step moves the spatial frequencies to the nearest allowed values in the discrete frequency domain. The complex amplitudes remain the same. The two-step procedure is repeated for each segment to compute the CGH.
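The two-step segment computation just described (bin each point's plane-wave contribution to the nearest discrete spatial frequency, then apply one inverse FFT) can be sketched as follows; the names and the scaling convention are our own.

```python
import numpy as np

def pas_segment(freqs_u, freqs_v, amplitudes, seg_size, pitch):
    """PAS-with-FFT sketch for one hologram segment: each point's
    plane-wave contribution is placed at the nearest discrete spatial
    frequency bin, and a single inverse FFT yields the fringe pattern."""
    spectrum = np.zeros((seg_size, seg_size), dtype=complex)
    for u, v, a in zip(freqs_u, freqs_v, amplitudes):
        # Nearest allowed bin: l = round(u * S * pitch); the modulo
        # maps negative frequencies to standard FFT index order
        lu = int(round(u * seg_size * pitch)) % seg_size
        lv = int(round(v * seg_size * pitch)) % seg_size
        spectrum[lv, lu] += a        # complex amplitude kept unchanged
    # Undo ifft2 normalization so a unit amplitude gives a unit sinusoid
    return np.fft.ifft2(spectrum) * seg_size ** 2
```

One inverse FFT per segment replaces the per-pixel evaluation of every sinusoid, which is the source of the acceleration.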

The PAS approximation should yield a wavefront close to the wavefront provided by the Rayleigh-Sommerfeld diffraction model that treats the propagating light from a point as a spherical wave. The complex amplitude in the reference model is given by:

$$O_O^{\rm RS}(\xi,\eta) = \sum_{p=1}^{P} \frac{A_p}{r_p} \exp\left(j\frac{2\pi}{\lambda_d}r_p\right), \qquad A_p = a_p \exp\left(j\varphi_p\right) \tag{6}$$

Figure 9. Schematic representation of the synthesis of a CGH within a segment.

We applied PAS computation to generate digital input content for a wavefront printer developed by us for printing white-light viewable full-parallax reflection holograms [73, 74]. The printed hologram was recorded as a 2D array of elemental holograms. The CGH for each elemental hologram was fed to an amplitude SLM with 1920 × 1080 pixels. The object beam encoded in the CGH was extracted by spatial filtering and demagnified using a telecentric lens system. Unlike the HS printers [43], the wavefront printer uses full holographic data. That is why the synthesis of a large number of elemental holograms, e.g. 100 × 100, takes a very long time and requires a fast computation method that provides imaging quality close to the reference model. We solved this task by developing a fast PAS (FPAS) method [75] as a further elaboration of the existing PAS methods. Usage of the FFT is crucial for fast PAS implementation, but it may degrade the imaging quality because the spatial frequencies are mapped to a predetermined coarse set of discrete values. The sampling step, $1/(S\Delta_d)$, in the frequency domain cannot be made small, because the segment size must remain small for the plane-wave approximation of the reference model to hold. Thus, the fringe pattern generated by the PAS with FFT steers the diffracted light inaccurately. The improvements developed to compensate the error caused by the frequency mapping rely on the two possible means of steering control: phase compensation and finer sampling of the spectrum attached to each segment. The functional form of the developed approximations is given in Table 1, which lists the fringe pattern at a single spatial frequency in the segment $(m, n)$; $(\xi_{mn}^c, \eta_{mn}^c)$ is the central point of the segment, and the following notation is introduced for the complex sinusoid:

$$F(u_{mn}^p, v_{mn}^p) = \frac{A_p}{r_{mn}^p} \exp\left\{j2\pi\left[u_{mn}^p(\xi - \xi_{mn}^c) + v_{mn}^p(\eta - \eta_{mn}^c)\right]\right\} \tag{7}$$

The first improvement, the compensated PAS (CPAS) [76], performs partial steering correction by adding a phase that includes the difference between the spatial frequencies in the continuous and the discrete domains. The CPAS provides a better reconstructed image than the PAS with FFT at almost the same calculation time. Finer sampling was proposed in the accurate PAS (APAS) [77] by computing the FFT over an area that exceeds the segment and by properly truncating the larger-size IFFT output. Phase compensation and directional error reduction by


| Method | Fringe pattern of the method |
|---|---|
| CS | $F(u_{mn}^p, v_{mn}^p)$ |
| PAS (no FFT) | $F(u_{mn}^p, v_{mn}^p)\,\exp(jkr_{mn}^p)$ |
| PAS (FFT) | $F(\hat u_{mn}^p, \hat v_{mn}^p)\,\exp(jkr_{mn}^p)$ |
| CPAS | $F(\hat u_{mn}^p, \hat v_{mn}^p)\,\exp(jkr_{mn}^p)\cdot\exp\{j2\pi[(\hat u_{mn}^p - u_{mn}^p)(\xi_{mn}^c - x_p) + (\hat v_{mn}^p - v_{mn}^p)(\eta_{mn}^c - y_p)]\}$ |
| APAS | $F(\hat u'^p_{mn}, \hat v'^p_{mn})\,\exp(jkr_{mn}^p)$ |
| ACPAS | $F(\hat u'^p_{mn}, \hat v'^p_{mn})\,\exp(jkr_{mn}^p)\cdot\exp\{j2\pi[(\hat u'^p_{mn} - u_{mn}^p)(\xi_{mn}^c - x_p) + (\hat v'^p_{mn} - v_{mn}^p)(\eta_{mn}^c - y_p)]\}$ |
| FPAS | $F(\hat u'^p_{mn}, \hat v'^p_{mn})\,\exp(jkr_{mn}^p)\cdot\exp\{j2\pi[u_{mn}^p(\xi_{mn}^c - x_p) + v_{mn}^p(\eta_{mn}^c - y_p)]\}$ |

Spatial frequencies: $\hat u_{mn}^p = l_u/(S\Delta_d)$, $\hat v_{mn}^p = l_v/(S\Delta_d)$, $-S/2 \le l_u, l_v \le S/2$; $\hat u'^p_{mn} = l_u/(S'\Delta_d)$, $\hat v'^p_{mn} = l_v/(S'\Delta_d)$, $-S'/2 \le l_u, l_v \le S'/2$, $S' > S$.

Table 1. Single-frequency fringe pattern in the segment $(m, n)$.
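The frequency quantization and the CPAS correction term listed in Table 1 can be sketched directly. The names are our own and the geometry is reduced to a single point and a single segment; this is an illustration of the functional form, not the authors' implementation.

```python
import numpy as np

def quantize_frequency(u, seg_size, pitch):
    """Map a continuous spatial frequency to the nearest discrete FFT
    bin, u_hat = l / (S * delta_d), as the PAS-with-FFT step does."""
    l = round(u * seg_size * pitch)
    return l / (seg_size * pitch)

def cpas_compensation_phase(u, u_hat, v, v_hat, seg_center, point_xy):
    """CPAS correction from Table 1: a unit-modulus phase factor built
    from the quantization error (u_hat - u, v_hat - v) and the offset
    between the segment centre and the object point."""
    dxi = seg_center[0] - point_xy[0]
    deta = seg_center[1] - point_xy[1]
    return np.exp(1j * 2 * np.pi * ((u_hat - u) * dxi + (v_hat - v) * deta))
```

APAS and ACPAS shrink the error (u_hat - u) itself by computing the FFT over the larger area S', which makes the frequency grid finer.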

Figure 10. Photographs of reconstruction from printed holograms: (a)–(c) different views of a church model; (d) a 9 cm × 9 cm printed hologram of a bunch of flowers.

finer sampling were merged into a single step in the ACPAS algorithm [57], which yielded quality of reconstruction very close to the reference model. The best results are provided by the FPAS algorithm, which features better phase compensation than the previous methods. This was confirmed by quality assessment with conventional image-based objective metrics such as the intensity distribution and the peak signal-to-noise ratio [75] for reconstruction of a single point, and also by the good quality of reconstruction from white-light viewable colour holograms (Figure 10) printed by our wavefront printing system [73] on the extra-fine-grain silver-halide emulsion Ultimate08 [78]. The CGH computed by the FPAS algorithm for each elemental hologram was displayed on an amplitude-type SLM. The demagnified pixel interval was 0.42 μm at the plane of the hologram, which gives a diffraction angle of 39.3°. For uniform illumination of the CGH on the SLM without decreasing the laser beam intensity too much, we used only 852 × 852 pixels of the SLM to project the CGHs. Thus, the size of the elemental hologram became 0.38 mm × 0.38 mm. The printed holograms are shown in Figure 10; their sizes are 5 cm × 5 cm and 9 cm × 9 cm. The smaller hologram consisted of 131 × 131 elemental holograms. The segment size for calculating the CGH fed to a given elemental hologram was 32 × 32 pixels, while the FFT computation area was 128 × 128 pixels, and each elemental hologram comprised more than 700 segments.
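The quoted diffraction angle is consistent with the grating relation θ = arcsin(λ/(2Δd)) at the demagnified pixel interval. A 532 nm recording wavelength is our assumption (it is not stated in this passage, but it reproduces the quoted value):

```python
import math

# Diffraction angle from the pixel interval: theta = asin(lambda / (2 * pitch)).
wavelength = 0.532   # micrometres; assumed laser wavelength
pitch = 0.42         # micrometres; demagnified pixel interval from the text
theta = math.degrees(math.asin(wavelength / (2 * pitch)))
print(f"{theta:.1f} deg")   # prints "39.3 deg"
```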
