## 5. Experimental results

We demonstrate the depth estimation results and several applications based on the estimated depth map, including all-in-focus imaging, refocusing, defocus magnification, and 2D-to-3D conversion. We use synthetic images, real images, and video frames to evaluate the performance of the method.

### 5.1 Synthetic image

First, we evaluated the depth estimation on synthetic images. The scene images with ground truth depth maps are from [13], but the images and depth maps are not aligned. The proposed method estimates blurriness from a blurred image, whereas the given scene images are close to all-in-focus. Therefore, we align each depth map with its original image by nearest-neighbor scaling and then apply Eq. (4) to generate a blur map from the given depth map under fixed camera parameters. The standard deviations of the resulting Gaussian blur kernels are restricted to the range from 0 to 8. A blurred image is then synthesized by convolving the original scene image with the corresponding blur map. Two sets of original scene images, ground truth depth maps, synthetic blurred images, and estimated depth maps are shown in Figure 7(a), (b), (c), and (d), respectively. Perceptually, the estimated depth maps agree with the ground truth depth maps in their outlines.
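As an illustration, the blurred-image synthesis step can be sketched as follows. This is a minimal sketch, not the chapter's implementation: the mapping from depth to blur radius in Eq. (4) is collapsed into a single illustrative gain `k` standing in for the fixed camera parameters, and the spatially varying convolution is approximated by blending a small bank of uniformly blurred copies.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthesize_blurred(image, depth, focus_depth=0.0, k=8.0, n_levels=9):
    """Blur each pixel with a Gaussian whose sigma grows with the
    distance from the focal plane; sigma is clamped to [0, 8] as in
    the experiment. `image` and `depth` are 2D float arrays; `k` and
    `n_levels` are illustrative parameters, not from the chapter."""
    sigma_map = np.clip(k * np.abs(depth - focus_depth), 0.0, 8.0)
    # Approximate the spatially varying blur by snapping each pixel's
    # sigma to the nearest of a few precomputed blur levels.
    levels = np.linspace(0.0, 8.0, n_levels)
    idx = np.argmin(np.abs(sigma_map[..., None] - levels), axis=-1)
    out = np.empty_like(image, dtype=np.float64)
    for i, s in enumerate(levels):
        blurred = image if s == 0 else gaussian_filter(image.astype(np.float64), sigma=s)
        mask = idx == i
        out[mask] = blurred[mask]
    return out
```

Pixels on the focal plane are copied through unchanged, so an all-zero depth map reproduces the input image exactly.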

Figure 7.

Datasets for the blurriness estimation. (a) Original scene image. (b) The ground truth depth map. (c) Synthetic blurred input images. (d) Estimated depth map.

Depth Extraction from a Single Image and Its Application DOI: http://dx.doi.org/10.5772/intechopen.84247

Figure 8.

(a) The original image. (b) Estimated depth map. (c) All-in-focus image, derived with the depth map in (b). (d) Depth map quantized into two layers; the darker area is the foreground, and the bright area is the background. (e) Refocused image. (f) Defocus magnification image.

### 5.2 Real image

Second, we apply the depth estimation method to real images. Figure 8 shows the results on a two-layer image; the input image (a) is from [14]. The estimated depth map, synthesized all-in-focus image, quantized depth map, refocused image, and defocus magnification image are shown in Figure 8(b)–(f), respectively. With the quantized depth map, refocusing and defocus magnification can be performed on the all-in-focus image. Therefore, given a high-quality depth map and all-in-focus image, these applications perform well.

Figure 9.

(a) The poker card image. (b) Synthetic all-in-focus image from the estimated depth map. (c) Magnified regions from (a). (d) Magnified regions from (b).
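A hedged sketch of how the two applications can be driven by the quantized two-layer map: refocusing re-blurs one layer of the all-in-focus image while keeping the other sharp, and defocus magnification applies a stronger blur to the background than the original capture had. The function names and fixed sigma values here are illustrative assumptions, not the chapter's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refocus(all_in_focus, background_mask, sigma=4.0):
    """Shift focus to the foreground: blur only the pixels marked as
    background (True in the boolean mask), keeping the foreground
    layer of the all-in-focus image sharp."""
    blurred = gaussian_filter(all_in_focus, sigma=sigma)
    return np.where(background_mask, blurred, all_in_focus)

def defocus_magnification(all_in_focus, background_mask, sigma=8.0):
    """Exaggerate the depth-of-field effect by re-blurring the
    background with a larger sigma than the original capture."""
    return refocus(all_in_focus, background_mask, sigma=sigma)
```

Swapping the mask selects which layer receives the synthetic blur, which is how focus can be moved between the two quantized layers.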

Figure 10.

(a) The slanting brick wall image. (b) Synthetic all-in-focus image from the estimated depth map. (c) Magnified regions from (a). (d) Magnified regions from (b).

Figure 11.

2D to 3D conversion. (a) Input image (left-eye image). (b) Synthesized right-eye image. (c) Synthesized anaglyph 3D image.


Figures 9 and 10 present the performance of the all-in-focus synthesis. Figure 9(a) is a three-layer poker card image in which the camera was focused on the third row of cards. The blurriness increases from the bottom of the image to the top, so the first and second rows of cards are out of focus. Figure 9(b) shows the synthesized all-in-focus image, and (c) and (d) show the corresponding magnified regions from (a) and (b), respectively. Figure 10(a) was captured from a slanted brick wall. As the camera was focused on the leftmost part of the image, the blurriness of the brick wall progressively increases from left to right. Figure 10(b) shows the synthesized all-in-focus image, and (c) and (d) show the magnified regions. In both sets of images, the synthesized results show a significant improvement in sharpness compared to the original images.
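The chapter's all-in-focus synthesis is produced by its deblurring stage; as a rough illustration only (not the chapter's algorithm), each region can be deconvolved with its estimated Gaussian kernel, for example by frequency-domain Wiener filtering, and the sharpened regions composited. The noise-to-signal ratio `nsr` below is an illustrative regularization constant.

```python
import numpy as np

def gaussian_otf(shape, sigma):
    """Optical transfer function of a spatial Gaussian with std
    `sigma`: its Fourier transform is exp(-2 pi^2 sigma^2 f^2)."""
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    return np.exp(-2.0 * (np.pi ** 2) * (sigma ** 2) * (fx ** 2 + fy ** 2))

def wiener_deblur(blurred, sigma, nsr=1e-3):
    """Wiener deconvolution of a Gaussian blur in the frequency
    domain; `nsr` regularizes frequencies the blur has destroyed."""
    H = gaussian_otf(blurred.shape, sigma)
    B = np.fft.fft2(blurred)
    F = np.conj(H) * B / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F))
```

With the estimated sigma matching the actual blur, the restored image is substantially closer to the sharp original than the blurred input is.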

### 5.3 Video frame

Third, we apply the method to video frames. In this subsection, we show the performance of 2D-to-3D conversion. The video frames in Figure 11(a), taken from YouTube, are used as the input left-eye images; (b) shows the right-eye images synthesized from the left-eye images and their corresponding depth maps; and (c) shows the anaglyphs synthesized from (a) and (b). With anaglyph glasses, the 3D effect can be visualized. The 3D effect can also be experienced by viewing the stereo pair on a mobile device mounted in a VR viewer such as Google Cardboard.
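The conversion pipeline above can be sketched as follows, under simplifying assumptions not stated in the chapter: disparity is taken as linearly proportional to the normalized depth (larger depth meaning closer, hence a larger leftward shift in the right-eye view), and disocclusion holes are filled by repeating the last valid pixel in each row.

```python
import numpy as np

def shift_right_eye(left_rgb, depth, max_disparity=8):
    """Synthesize a right-eye view by shifting each left-eye pixel
    left by a depth-proportional disparity (simple depth-image-based
    rendering); holes are filled with the last valid pixel per row."""
    h, w, _ = left_rgb.shape
    right = np.zeros_like(left_rgb)
    filled = np.zeros((h, w), dtype=bool)
    disparity = (depth * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]
            if 0 <= nx < w:
                right[y, nx] = left_rgb[y, x]
                filled[y, nx] = True
        last = left_rgb[y, 0]
        for x in range(w):
            if filled[y, x]:
                last = right[y, x]
            else:
                right[y, x] = last  # crude disocclusion fill
    return right

def anaglyph(left_rgb, right_rgb):
    """Red-cyan anaglyph: red channel from the left view, green and
    blue channels from the right view."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out
```

A zero depth map yields a right-eye view identical to the left, which is a quick sanity check for the warping step.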
