**3. 3D reconstruction**

Shape, or depth-profile, reconstruction recovers depth information from 2D images, and it is now widely used in many fields, such as medicine, robotics, and remote sensing.

All existing DFD algorithms can be divided into two kinds: local DFD algorithms and global DFD algorithms. In local DFD, a window around every pixel is predefined, and the pixel's blurring is defined as that of the window (Pentland, 1987; Vinay & Subhasis, 2007). However, selecting a proper window size is a well-known difficulty of local DFD, because there is a trade-off between making the window as large as possible to average out noise, and as small as possible to guarantee that the depth within it is approximately constant (Ens & Lawrence, 1993; Nair & Stewart, 1992). As for global DFD, its main idea is completely different from local DFD: it works on the entire image without prior knowledge of its radiance (the appearance of the surfaces) or depth. It is therefore necessary to estimate the depth model and the radiance model simultaneously (Favaro et al., 2002, 2003, 2008), which brings a huge computational cost. A general way to reduce this cost is to simplify the imaging model, for example by assuming that the scene contains "sharp edges", that is, discontinuities (Asada et al., 1998). Another way is to use a cubic function or structured light to approximate the radiance (Nayar et al., 1996; Lagnado & Osher, 1997). Unfortunately, both local and global DFD rely on capturing two defocused images with different camera parameters, which may damage the camera drastically if its amplification level is high.

In this section, a novel DFD method using a single fixed optical microscope is proposed to reconstruct the shape of samples at the micro/nano scale. In this method, the blurred-image model is constructed from the relative blurring and the diffusion equation, and the relation between depth and blurring is discussed in four cases. The proposed method needs only one microscope with unchanged camera parameters, so the reconstruction process is very simple. The experiments and error analysis show that it can reconstruct shape at the micro/nano scale.

#### **3.1 The imaging model for defocus**

538 Mechanical Engineering

Fig. 10. The driving characteristic of a piezoelectric actuator: movement (nm) versus driving voltage (V) for the cycles 0V-200V-0V, 0V-160V-0V, 0V-120V-0V, 0V-80V-0V, and 0V-40V-0V

In the defocus imaging model, a defocused image can theoretically be considered as the superposition of defocused points, and this process is normally denoted by the following convolution:

$$E(x, y) = I(x, y) \ast h(x, y) \tag{17}$$

where *E*(*x*, *y*) and *I*(*x*, *y*) are the defocused image and the focused image, respectively, and *h*(*x*, *y*) is the point spread function.
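As a numerical illustration of Eq.(17), the following minimal sketch convolves a point source with a Gaussian point spread function (the image size, σ, and kernel radius are illustrative choices, not taken from the chapter):

```python
import numpy as np

def psf_gauss(sigma, radius):
    # Sampled, normalized 2-D Gaussian point spread function h(x, y)
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    h = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return h / h.sum()

def defocus(I, h):
    # E(x, y) = I(x, y) * h(x, y): direct 2-D convolution with zero padding
    r = h.shape[0] // 2
    Ip = np.pad(I, r)
    E = np.zeros_like(I, dtype=float)
    for i in range(I.shape[0]):
        for j in range(I.shape[1]):
            E[i, j] = (Ip[i:i + 2 * r + 1, j:j + 2 * r + 1] * h[::-1, ::-1]).sum()
    return E

I = np.zeros((21, 21))
I[10, 10] = 1.0                          # a single in-focus point source
E = defocus(I, psf_gauss(sigma=1.2, radius=4))
# The point is spread into a blur spot while the total brightness is preserved.
```

Because the kernel is normalized, the defocused image conserves the energy of the focused one; only its spatial distribution changes.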

When the point spread function is approximated by a shift-invariant Gaussian function, the imaging model in Eq.(17) can be formulated in terms of the isotropic heat equation:

$$\begin{cases} \dot{u}(x, y, t) = a \Delta u(x, y, t), \; a \in [0, \infty), \; t \in (0, \infty) \\ u(x, y, 0) = I(x, y) \end{cases} \tag{18}$$

where *a* is the diffusion coefficient, $\dot{u} = \partial u / \partial t$, and "$\Delta$" denotes the Laplacian operator:

$$
\Delta u = \frac{\partial^2 u(x, y, t)}{\partial x^2} + \frac{\partial^2 u(x, y, t)}{\partial y^2}.
$$

If the depth map is an equifocal plane, *a* is constant. Otherwise, *a* is shift-variant, and the diffusion equation becomes:

$$\begin{cases} \dot{u}(x, y, t) = \nabla \cdot (a(x, y) \nabla u(x, y, t)), \; t \in (0, \infty) \\ u(x, y, 0) = r(x, y) \end{cases} \tag{19}$$

where "$\nabla$" denotes the gradient operator, "$\nabla \cdot$" denotes the divergence operator, $\nabla = \left[ \frac{\partial}{\partial x}, \frac{\partial}{\partial y} \right]^T$, and *r*(*x*, *y*) is the radiance of the scene (the focused image).

It is also easy to verify that the variance σ² of the Gaussian point spread function is related to the diffusion coefficient *a* via:

$$
\sigma^2 = 2ta
\tag{20}
$$
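The relation in Eq.(20) can be checked numerically by diffusing a unit impulse with an explicit finite-difference scheme and measuring the variance of the result (a sketch; the grid size, *a*, and the time step are illustrative):

```python
import numpy as np

a, dt, steps = 0.5, 0.01, 400            # diffusion coefficient, time step, step count
x = np.arange(-50, 51, dtype=float)      # 1-D grid with unit spacing
u = np.zeros_like(x)
u[50] = 1.0                              # unit impulse at x = 0

for _ in range(steps):
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)   # discrete Laplacian
    u = u + dt * a * lap                 # explicit Euler step of u_t = a * u_xx

t = steps * dt
var = (u * x**2).sum() / u.sum()         # variance of the diffused profile
# Eq.(20) predicts var = 2 * t * a = 4.0
```

The scheme is stable here since $a\,\Delta t / \Delta x^2 = 0.005 \le 0.5$, and each Euler step increases the variance by exactly $2a\,\Delta t$ on the discrete grid.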

Suppose there are two images *E*1(*x*, *y*) and *E*2(*x*, *y*) for two different focus settings, with σ1 < σ2 (that is, *E*2(*x*, *y*) is more defocused than *E*1(*x*, *y*)); then *E*2(*x*, *y*) can be written as:

$$\begin{split} E_2(x, y) &= \iint \frac{1}{2\pi\sigma_2^2} \exp\left(-\frac{(x - u)^2 + (y - v)^2}{2\sigma_2^2}\right) r(u, v)\, du\, dv \\ &= \iint \frac{1}{2\pi\Delta\sigma^2} \exp\left(-\frac{(x - u)^2 + (y - v)^2}{2\Delta\sigma^2}\right) E_1(u, v)\, du\, dv \end{split} \tag{21}$$

Applications of Computer Vision in Micro/Nano Observation 541


where $\Delta\sigma^2 = \sigma_2^2 - \sigma_1^2$ is called the relative blurring (Favaro et al., 2008). So Eq.(18) can be written as:

$$\begin{cases} \dot{u}(\mathbf{x}, \mathbf{y}, t) = a \Delta u(\mathbf{x}, \mathbf{y}, t) \; a \in [0, \infty) \; t \in (0, \infty) \\ u(\mathbf{x}, \mathbf{y}, 0) = E\_1(\mathbf{x}, \mathbf{y}) \end{cases} \tag{22}$$

Eq.(19) becomes:

$$\begin{cases} \dot{u}(\mathbf{x}, \mathbf{y}, t) = \nabla \cdot (a(\mathbf{x}, \mathbf{y}) \nabla u(\mathbf{x}, \mathbf{y}, t)) \, \text{t} \in (0, \infty) \\ u(\mathbf{x}, \mathbf{y}, 0) = E\_1(\mathbf{x}, \mathbf{y}) \end{cases} \tag{23}$$

When the time shift is $\Delta t$, the solution of the diffusion equation is $u(x, y, \Delta t) = E_2(x, y)$, and $\Delta\sigma^2$ can be written as:

$$
\Delta \sigma^2 = 2(t_2 - t_1)a \doteq 2\Delta t\, a \tag{24}
$$
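Eqs.(21)-(24) rest on the semigroup property of Gaussian blurring: blurring by σ1 and then by the relative blur Δσ is equivalent to blurring once by σ2 with σ2² = σ1² + Δσ². A numpy sketch with sampled 1-D kernels (the signal and σ values are illustrative):

```python
import numpy as np

def gauss_blur(signal, sigma, radius=25):
    # Convolve with a sampled, normalized Gaussian kernel
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return np.convolve(signal, k / k.sum(), mode='same')

r = np.zeros(201)
r[90:110] = 1.0                              # a step-like radiance profile
sigma1, sigma2 = 1.5, 2.5
dsigma = (sigma2**2 - sigma1**2) ** 0.5      # relative blur: sigma2^2 - sigma1^2

E1 = gauss_blur(r, sigma1)                   # less defocused image
E2_direct = gauss_blur(r, sigma2)            # more defocused image
E2_relative = gauss_blur(E1, dsigma)         # blur E1 by the relative blur only
# E2_relative matches E2_direct up to sampling error
```

This is exactly why the more defocused image can be generated from the less defocused one without knowing the radiance *r*.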

Thus, the relation between the relative blurring and the depth map can be denoted as:

$$
\Delta \sigma^2 = \gamma^2 \left( b_2^2 - b_1^2 \right) \tag{25}
$$

where $\gamma$ is a proportionality constant relating the blurring degree to the blurring radius, and $b_i$ ($i$ = 1, 2) is the radius of the blur circle:

$$b = \frac{Dv}{2} \left| \frac{1}{f} - \frac{1}{v} - \frac{1}{s} \right| \tag{26}$$

where *s* denotes the depth of the blurred point, *v* the distance from the lens to the image plane, *f* the focal length, and *D* the radius of the lens.
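Eq.(26) can be sketched directly, using the lens parameters reported later in the experiments (*f* = 0.357 mm, *s*0 = 3.4 mm, *D* = *f*/2) and choosing *v* from the thin-lens relation 1/*f* = 1/*v* + 1/*s*0:

```python
def blur_radius(s, f, v, D):
    # Eq.(26): radius of the blur circle for a point at depth s, given the
    # focal length f, image distance v, and lens radius D
    return D * v / 2 * abs(1.0 / f - 1.0 / v - 1.0 / s)

f, s0 = 0.357, 3.4                  # mm, values from the experiments in 3.3
D = f / 2
v = 1.0 / (1.0 / f - 1.0 / s0)      # image distance that brings depth s0 into focus

b_focus = blur_radius(s0, f, v, D)  # vanishes at the in-focus depth
b_far = blur_radius(4.0, f, v, D)   # grows as the depth moves away from s0
```

The blur radius is zero only at the focus depth *s*0 and increases monotonically as the point moves away from it, which is what makes blur a usable depth cue.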

#### **3.2 The new shape reconstruction method**

Suppose *E*1(*x*, *y*), whose depth map is *s*1(*x*, *y*), is the defocused image obtained before the depth variation, and *E*2(*x*, *y*) is another defocused image obtained after the depth variation. In this section we propose a new shape-from-defocus method in which the depth map *s*2(*x*, *y*) is obtained through a depth change Δ*s*; the main idea is shown in Fig. 11.

Fig. 11. The main theory of our method

Suppose *s*0 is the focus depth, and *s*2(*x*, *y*) = *s*1(*x*, *y*) - Δ*s*(*x*, *y*). Based on the diffusion equations in Section 3.1, the following functions can be given:

$$\begin{cases} \dot{u}(\mathbf{x}, \boldsymbol{y}, t) = \nabla \cdot (a(\mathbf{x}, \boldsymbol{y}) \nabla u(\mathbf{x}, \boldsymbol{y}, t)) \; t \in (0, \infty) \\ u(\mathbf{x}, \boldsymbol{y}, 0) = E\_1(\mathbf{x}, \boldsymbol{y}) \\ u(\mathbf{x}, \boldsymbol{y}, \Delta t) = E\_2(\mathbf{x}, \boldsymbol{y}) \end{cases} \tag{27}$$

where the relative blurring can be denoted as:

$$\begin{split} \Delta\sigma^2(x, y) &= \gamma^2 \left( b_2^2(x, y) - b_1^2(x, y) \right) \\ &= \frac{\gamma^2 D^2 v^2}{4} \left[ \left( \frac{1}{f} - \frac{1}{v} - \frac{1}{s_2(x, y)} \right)^2 - \left( \frac{1}{f} - \frac{1}{v} - \frac{1}{s_1(x, y)} \right)^2 \right] \\ &= \frac{\gamma^2 D^2 v^2}{4} \left[ \left( \frac{1}{s_0} - \frac{1}{s_2(x, y)} \right)^2 - \left( \frac{1}{s_0} - \frac{1}{s_1(x, y)} \right)^2 \right] \end{split} \tag{28}$$

Define:

$$k = \frac{4\Delta\sigma^2(x, y)}{\gamma^2 D^2 v^2} + \left( \frac{1}{s_0} - \frac{1}{s_1(x, y)} \right)^2,$$

thus the desired depth map is:

$$s\_2(x, y) = 1 / \left(\frac{1}{s\_0} \pm \sqrt{k}\right) \tag{29}$$

In real applications, it is reasonable to distinguish the following four cases as the distance between the sample and the microscope decreases.

a. s1 > s2 > s0


Fig. 12. The theory of case A

In this case, *E*1(*x*, *y*) and *E*2(*x*, *y*) both lie on the far side of *s*0 (*s*1 > *s*2 > *s*0), and *E*1(*x*, *y*) is more defocused than *E*2(*x*, *y*), so the process from *E*1(*x*, *y*) to *E*2(*x*, *y*) is a backward diffusion; that is, the diffusion coefficient *a* is negative. The principle is shown in Fig. 12, and the final depth can be denoted as:

$$s\_2(x, y) = 1 / \left(\frac{1}{s\_0} - \sqrt{k}\right) \tag{30}$$


b. s0 > s1 > s2

Fig. 13. The theory of case B

As shown in Fig. 13, here *E*1(*x*, *y*) and *E*2(*x*, *y*) both lie on the near side of *s*0 (*s*0 > *s*1 > *s*2), and *E*1(*x*, *y*) is less defocused than *E*2(*x*, *y*), so the process from *E*1(*x*, *y*) to *E*2(*x*, *y*) is a forward diffusion, and the diffusion coefficient *a* is positive. The final depth can be denoted as:

$$s\_2(x, y) = 1 / \left(\frac{1}{s\_0} + \sqrt{k}\right) \tag{31}$$

c. s1 > s0, s2 < s0, (s0 - s2) < (s1 - s0)

Fig. 14. The theory of case *C* 

This case is a little more complicated than the first two. *E*1(*x*, *y*) is more defocused than *E*2(*x*, *y*), but they are not on the same side of *s*0. Suppose *s'*2(*x*, *y*) is the symmetrical depth of *s*2(*x*, *y*) about *s*0; the process can then be transferred from Fig. 14(a) to Fig. 14(b), and the final depth can be denoted as:

$$s\_2'(x, y) = 1 \\ / \left(\frac{1}{s\_0} - \sqrt{k}\right) \tag{32}$$

$$s_2(x, y) = s_0 - (s_2'(x, y) - s_0) = 2s_0 - s_2'(x, y) \tag{33}$$

d. s1 > s0, s2 < s0, (s0 - s2) > (s1 - s0)


Fig. 15. The theory of case *D* 

Here, *E*1(*x*, *y*) is less defocused than *E*2(*x*, *y*), and they are not on the same side of *s*0. Suppose *s'*2(*x*, *y*) is the symmetrical depth of *s*2(*x*, *y*) about *s*0; the process can then be transferred as in Fig. 15(b), and the final depth can be denoted as:

$$s\_2'(x, y) = 1 \;/\left(\frac{1}{s\_0} + \sqrt{k}\right) \tag{34}$$

$$s_2(x, y) = s_0 - (s_2'(x, y) - s_0) = 2s_0 - s_2'(x, y) \tag{35}$$
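The four cases above can be collected into a scalar sketch (the helper names are ours; a forward computation of Δσ² via Eq.(28) is used to round-trip the recovery):

```python
def relative_blur(s1, s2, s0, gamma, D, v):
    # Eq.(28) for scalar depths: Delta sigma^2 between the two defocused images
    c = gamma**2 * D**2 * v**2 / 4
    return c * ((1/s0 - 1/s2)**2 - (1/s0 - 1/s1)**2)

def recover_depth(dsigma2, s1, s0, gamma, D, v, case):
    # k as defined below Eq.(28); the branch follows cases a-d, Eqs.(30)-(35)
    k = 4 * dsigma2 / (gamma**2 * D**2 * v**2) + (1/s0 - 1/s1)**2
    root = k ** 0.5
    if case == 'a':                          # s1 > s2 > s0, Eq.(30)
        return 1 / (1/s0 - root)
    if case == 'b':                          # s0 > s1 > s2, Eq.(31)
        return 1 / (1/s0 + root)
    if case == 'c':                          # Eqs.(32)-(33): reflect about s0
        return 2 * s0 - 1 / (1/s0 - root)
    if case == 'd':                          # Eqs.(34)-(35): reflect about s0
        return 2 * s0 - 1 / (1/s0 + root)

# Round trips for cases a and b (gamma, D, v are illustrative values):
gamma, D, v, s0 = 1.0, 0.1, 0.4, 3.4
d_a = relative_blur(4.0, 3.7, s0, gamma, D, v)   # case a: s1=4.0 > s2=3.7 > s0
d_b = relative_blur(3.0, 2.8, s0, gamma, D, v)   # case b: s0 > s1=3.0 > s2=2.8
```

For cases a and b the recovery is exact, since $\sqrt{k} = |1/s_0 - 1/s_2|$; cases c and d additionally apply the chapter's reflection of the recovered depth about *s*0.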

As a global algorithm, we construct the following optimization problem to calculate the solutions of the diffusion equations:

$$\tilde{s} = \underset{s_2(x, y)}{\arg\min} \int \left( u(x, y, \Delta t) - E_2(x, y) \right)^2 dx dy \tag{36}$$

However, the optimization problem above is ill-posed (Favaro et al., 2008); that is, the minimum may not exist, and even if it exists, it may not be stable with respect to data noise. A common way to regularize the problem is to add a Tikhonov penalty:

$$\tilde{s} = \underset{s_2(x, y)}{\arg\min} \int \left( u(x, y, \Delta t) - E_2(x, y) \right)^2 dx dy + \alpha \left\| \nabla s_2(x, y) \right\|^2 + \alpha k \left\| s_2(x, y) \right\|^2 \tag{37}$$

where the additional terms impose a smoothness constraint on the depth map. In practice, we use α → 0 and k → 0, which are both very small, because these terms have little influence on the cost energy, denoted as:

$$F(s) = \int \left( u(x, y, \Delta t) - E_2(x, y) \right)^2 dx dy + \alpha \left\| \nabla s \right\|^2 + \alpha k \left\| s \right\|^2 \tag{38}$$

Thus the solution process is equivalent to the following:

$$\tilde{s} = \arg\min_{s} F(s) \quad \text{s.t. Eq.(34), Eq.(37)} \tag{39}$$

Eq.(39) is a dynamic optimization which can be solved by the gradient flow; the algorithm can be divided into the following steps (the detailed process can be found in the literature (Favaro et al., 2008)):

6. Evolve the depth map along the gradient flow:

$$\frac{\partial s}{\partial t} = -F'(s) \tag{40}$$

7. Compute Eq.(26), update the depth map, and return to step (3).

So if the initial depth is known, even if it is only a rough value, the dynamic depth, as well as the expected shape, can be reconstructed.

#### **3.3 Experiment results**

In order to validate the new algorithm, we used it to reconstruct the shapes of a nano standard grid, which is 500 nm high, and two AFM cantilevers. We used the HIROX-7700 microscope shown in Fig. 17 and magnified the grid 7000 times. The remaining parameters are as follows: *f* = 0.357 mm, *s*0 = 3.4 mm, *F*-number = 2, *D* = *f*/2.

Fig. 17. HIROX-7700

In order to investigate the influence of the region size on the algorithm, we tested the grid with three region sizes and two kinds of AFM cantilever. For the grid, the error map of each experiment was constructed by comparison with the true grid, and the mean square error of the proposed method was calculated to test the precision. When testing the AFM cantilevers, we used a PI nano platform to test the reconstruction precision.

#### **3.3.1 Shape reconstruction of the nano grid**

Firstly, an experiment using a 120×110-pixel grid region was conducted. The results are shown in Fig. 18 to Fig. 21. Fig. 18 shows the two defocused images, in which the left is the image before the variation and the right is that after the variation; Fig. 19 is the reconstructed 3D shape of the nano grid. In order to investigate the precision of the new algorithm, we constructed the error map *Φ* between the true shape *s* and the estimated shape *s̃*, and computed the mean square error *φ* of the whole image, using the formulas in Eq.(41) and Eq.(42):

$$\Phi = \frac{\tilde{s}}{s} - 1 \tag{41}$$

$$\varphi = E\left( \left( \frac{\tilde{s}}{s} - 1 \right)^2 \right) \tag{42}$$

Fig. 20 is the true shape of the grid and Fig. 21 is the error map.
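In the spirit of the error metrics Φ and φ of Eqs.(41) and (42), the evaluation can be sketched as follows (a minimal sketch; the relative-error form, the array shape, and the offset are our illustrative assumptions):

```python
import numpy as np

def error_map(s_true, s_est):
    # Eq.(41): per-pixel relative error between true and estimated shape
    return s_est / s_true - 1.0

def mean_square_error(s_true, s_est):
    # Eq.(42): mean of the squared relative error over the whole image
    return np.mean(error_map(s_true, s_est) ** 2)

s_true = np.full((4, 4), 3.4)          # a flat true depth map (mm)
s_est = s_true + 0.034                 # an estimate with a uniform 1% offset
Phi = error_map(s_true, s_est)         # relative-error map
phi = mean_square_error(s_true, s_est)
```

With a uniform 1% offset, Φ is about 0.01 everywhere and φ is about 1e-4, so φ directly summarizes the reconstruction precision over the whole image.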
