#### **3.3.1 Shape reconstruction of the nano grid**

Firstly, the experiment using a 120×110-pixel grid region was conducted. The results are shown in Fig. 18 to Fig. 21. Fig. 18 shows the two defocused images, in which the left is the image before variation and the right is that after variation; Fig. 19 is the constructed 3D shape of the nano grid. In order to investigate the precision of the new algorithm, we constructed the error map *Φ* between the true shape *s* and the estimated shape *ŝ*, and computed the mean square error *φ* of the whole image. The formulas are given in Eq.(41) and Eq.(42). Fig. 20 is the true shape of the grid and Fig. 21 is the error map.

$$
\Phi = \hat{s} - s \tag{41}
$$

$$
\phi = \sqrt{\mathbb{E}\left[\left(\hat{s} - s\right)^{2}\right]} \tag{42}
$$
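As a concrete illustration, both error measures can be computed directly from the depth maps. The sketch below is hypothetical (the function names and the synthetic data are our own), assuming the true and estimated shapes are stored as NumPy arrays:

```python
import numpy as np

def error_map(s_est, s_true):
    """Eq. (41): the pointwise error map between the estimated
    shape and the true shape."""
    return s_est - s_true

def rms_error(s_est, s_true):
    """Eq. (42): the square root of the mean squared error over
    the whole image."""
    phi = error_map(s_est, s_true)
    return float(np.sqrt(np.mean(phi ** 2)))

# Synthetic 120x110 grid region (the size used in this experiment),
# with small Gaussian noise standing in for reconstruction error.
s_true = np.zeros((120, 110))
s_est = s_true + 0.05 * np.random.default_rng(0).standard_normal(s_true.shape)
print(rms_error(s_est, s_true))
```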

Applications of Computer Vision in Micro/Nano Observation 547

Fig. 18. The defocused images

Fig. 19. The constructed 3D shape

Fig. 20. The true 3D shape

Fig. 21. The error map

From Fig. 18 to Fig. 21, we can see that the new algorithm attains good results in constructing the nano grid shape, and the precision of the proposed method is very high. The mean square error of the whole image is equal to 0.048, and the average error is -9.26 nm.

Secondly, we tested our algorithm on a grid region of 120×50 pixels; the results are shown in Fig. 22 to Fig. 25. The mean square error is equal to 0.038 and the average error is 10.39 nm. From the figures, we can see that the error of our reconstruction algorithm is slightly larger at the edges of the image and smaller in other regions; this results from the optimization method. However, the average error is only about 2.3%, which can certainly satisfy the demand of micro/nano magnification.
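The edge effect described above can be quantified by splitting the error map into a border band and an interior region and comparing their mean absolute errors. This is a hypothetical sketch (the names and the synthetic data are our own, not part of the original method):

```python
import numpy as np

def edge_vs_interior_error(phi, border=5):
    """Mean absolute error in a border band of width `border`
    versus the interior of the error map `phi`."""
    edge_mask = np.zeros(phi.shape, dtype=bool)
    edge_mask[:border, :] = True
    edge_mask[-border:, :] = True
    edge_mask[:, :border] = True
    edge_mask[:, -border:] = True
    return float(np.abs(phi[edge_mask]).mean()), float(np.abs(phi[~edge_mask]).mean())

# Synthetic 120x50 error map (the region size used here) whose error
# grows toward the image border, mimicking the observed behaviour.
rows = np.abs(np.linspace(-1.0, 1.0, 120))[:, None]
cols = np.abs(np.linspace(-1.0, 1.0, 50))[None, :]
phi = 10.0 * np.maximum(rows, cols) ** 4   # nm, larger near the edges
edge, interior = edge_vs_interior_error(phi)
print(edge > interior)  # the border band carries the larger error
```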

Fig. 22. The defocused images

Fig. 23. The constructed 3D shape


Fig. 24. The true 3D shape

Fig. 25. The error map

#### **3.3.2 Shape reconstruction of the AFM cantilever**

The raise height of the cantilevers was controlled by the Physik Instrumente (PI) nano platform. Furthermore, we evaluated the performance of the algorithm at three raise heights: 500 nm, 300 nm and 100 nm. The principle of the experiment is shown in Fig. 26.

Fig. 26. The experimental principle of AFM cantilever reconstruction


Firstly, the experiment using the conductive AFM cantilever was conducted. Fig. 27 shows two defocused images, in which the left is the image before variation and the right is that after variation; Fig. 28(a) to Fig. 28(c) are the constructed 3D shapes of the bent cantilever when the PI platform rises by 500 nm, 300 nm and 100 nm.

Fig. 27. The defocused images of the conductive cantilever

Fig. 28. The constructed 3D shapes for 500 nm, 300 nm and 100 nm


From Fig. 28, we can see that when the PI platform rises, the top end of the conductive cantilever bends noticeably, and the deflection decreases gradually from the top end to the trailing end until it approaches a steady value; the degree of bending is a monotonic function of the raise height. In order to compare the bending precision, we take the cross-sections of the reconstructions at the same position and show them in Fig. 29. From it, we can see that the deflection height increases proportionally as the platform rises, and the height difference between the top end and the steady value is exactly equal to the raise height of the PI platform.
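This cross-section check can be sketched numerically. Below, synthetic exponential profiles stand in for the reconstructed surfaces (the decay shape, names and array sizes are our assumptions); the point is that the tip-to-steady height difference recovers the platform rise:

```python
import numpy as np

def tip_deflection(surface, row):
    """Height difference between the free (tip) end and the steady
    value near the trailing end, along one cross-section row."""
    profile = surface[row, :]
    return float(profile[0] - profile[-1])

# Synthetic bent-cantilever surfaces for rises of 500, 300 and 100 nm:
# the deflection decays from the tip toward a steady value.
x = np.linspace(0.0, 8.0, 350)          # exp(-8) is ~0 at the trailing end
surfaces = {rise: np.tile(rise * np.exp(-x), (10, 1))
            for rise in (500.0, 300.0, 100.0)}
for rise, surf in surfaces.items():
    d = tip_deflection(surf, row=5)
    print(rise, round(d, 2))            # d is close to the rise itself
```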

Fig. 29. The comparison of the conductive cantilever

Secondly, the experiment using the triangle cantilever was conducted. Fig. 30 shows two defocused images, in which the left is the image before variation and the right is that after variation; Fig. 31(a) to Fig. 31(c) are the constructed 3D shapes of the bent cantilever at raise heights of 500 nm, 300 nm and 100 nm; Fig. 32 is the comparison of these three cross-sections. From them we can draw the same conclusions as in the last experiment, but the sensitivity of the triangle cantilever is lower, as the reconstructed shapes are a little rough.

From these experiments, we can see that, regardless of the cantilever shape, our algorithm can reconstruct the global bent shape exactly with only two defocused images. The following conclusions can be drawn:


1. The most obvious bending of the cantilever concentrates on the region near the tip; this is reasonable because, when the PI platform moves up, the stress all concentrates on the tip due to our experimental setup.
2. The cantilever's original shape, material and illumination can influence the reconstruction result to some extent. For example, the conductive cantilever is thinner than the triangle cantilever, and its shape reconstruction is smoother due to its higher sensitivity; the black edge of the cantilever introduces a small error in the result.
3. The larger the raise height, the more exact the calculation result and the smoother the reconstructed image.
4. No matter how large the raise height is, the reconstructed height finally tends to a steady value. Furthermore, the height difference between the maximum value and the original value is equal to the raise height of the PI platform.

Fig. 30. The defocused images of the triangle cantilever

Fig. 31. The constructed 3D shapes for 500 nm, 300 nm and 100 nm



Fig. 32. The comparison of the triangle cantilever
