**1. Introduction**


Nowadays, micro/nano science and technology has become one of the most attractive research fields, and real-time, accurate observation is a critically important enabling technique for micro/nano manipulation. Most recently, with the great development of microscopes and computer vision techniques, real-time visualization at the micro/nano scale, including 2D motion measurement and 3D reconstruction, is becoming possible.

As for 2D motion measurement, visual motion measurement at the micro/nano scale is still an open problem. Many researchers have designed different algorithms, most of them based on the block matching algorithm (BMA), which locates matching blocks in a searched digital image for the purpose of distance or similarity estimation. Usually, block-matching-based methods achieve better performance when the texture is not relevant, or when aliasing in the derivative estimation arises from large inter-frame displacements (Giachetti & Torre, 1996). Images, however, are typically processed assuming a uniform grid of pixels. While straightforward, the uniform grid representation does not scale well in a multi-scale setting, because it requires an excessive amount of refinement to capture small details in an image, including sub-pixel resolution. In most micro/nano manipulation situations, the motion to be estimated is small and non-integer. Therefore, it is necessary to improve the existing algorithms and obtain a precision not limited by the pixel dimension, i.e., sub-pixel motion estimation. In 1989, Anandan achieved sub-pixel precision by locally approximating the difference function with a quadratic surface, and related approaches have also been published (Horn, 1986; Horn & Schunck, 1981; Singh, 1990); however, sub-pixel estimation usually introduces more computational burden.
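The quadratic-surface idea behind sub-pixel refinement can be sketched as follows: compute the SSD over integer shifts, then fit a one-dimensional parabola through the minimum and its two neighbors along each axis. The function names and the Gaussian test pattern below are illustrative, not taken from Anandan's original formulation:

```python
import numpy as np

def ssd_surface(model, target, radius):
    """SSD between model and target over integer shifts in [-radius, radius]^2.
    Borders of width `radius` are excluded so every shift is comparable."""
    h, w = model.shape
    core = model[radius:h - radius, radius:w - radius]
    e = np.empty((2 * radius + 1, 2 * radius + 1))
    for du in range(-radius, radius + 1):
        for dv in range(-radius, radius + 1):
            cand = target[radius + du:h - radius + du, radius + dv:w - radius + dv]
            e[du + radius, dv + radius] = np.sum((core - cand) ** 2)
    return e

def subpixel_shift(e):
    """Integer argmin of the SSD surface, refined per axis by fitting a
    parabola through the minimum and its two neighbours."""
    iu, iv = np.unravel_index(np.argmin(e), e.shape)
    du = dv = 0.0
    if 0 < iu < e.shape[0] - 1:
        a, b, c = e[iu - 1, iv], e[iu, iv], e[iu + 1, iv]
        if a - 2 * b + c > 0:  # curvature check: a valid minimum
            du = 0.5 * (a - c) / (a - 2 * b + c)
    if 0 < iv < e.shape[1] - 1:
        a, b, c = e[iu, iv - 1], e[iu, iv], e[iu, iv + 1]
        if a - 2 * b + c > 0:
            dv = 0.5 * (a - c) / (a - 2 * b + c)
    radius = e.shape[0] // 2
    return iu - radius + du, iv - radius + dv
```

On a synthetically shifted smooth pattern, this sketch recovers shifts of a few tenths of a pixel, at the cost of evaluating the full SSD surface, which illustrates the extra computational burden mentioned above.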

As far as 3D reconstruction is concerned, depth measurement, i.e., obtaining 3D information from 2D images, is an important research field in computer vision, and it has become one of the key techniques in many fields, such as medicine, robotics, remote sensing and micro/nano manipulation. In recent years, various 3D reconstruction methods, including volumetric methods, depth from stereo (DFS), depth from focus (DFF) and depth from defocus (DFD) (Yin, 1999), have been researched and used in real applications.

Applications of Computer Vision in Micro/Nano Observation 529

An object function is needed to measure the difference, such as distance or similarity, between two images. Therefore, selecting a criterion to decide whether a given block in image *Y* matches the search block in image *X*, i.e., the object function, is of prime importance.

BMA-based techniques can usually be divided into two classes according to the measurement criterion: minimal difference and maximal similarity. The widely used object functions based on difference measurements include the Sum-of-Squared-Differences (SSD) and the Sum-of-Absolute-Differences (SAD), which can be transformed into the Locally-scaled SAD (LSAD) when the intensity is locally scaled, and into the Zero-mean SAD (ZSAD) by setting the average gray-level difference to zero. If the difference minimum is replaced by the maximum of a correlation measurement, other object functions are obtained, such as the Normalized-Cross-Correlation (NCC) (Qi & Michale, 1987), the Approximate-Maximum-Direct-Correlation (AMDC) (Kim & Meng, 2007), and other variations, all of which are approximate maximum likelihood estimators (Robinson & Milanfar, 2004).

Fig. 1. Motion of the continuous image *F*(*i*,*j*) with respect to the pixel grid

Here, SSD, LSAD, ZSAD and NCC are all adopted to estimate the motion between two neighboring images in the same image sequence. The theory is shown in Fig. 1. Generally, *X*(*i*,*j*) and *Y*(*i*,*j*) are referred to as the model image and the target image respectively, *F*(*i*,*j*) is the continuous image function, *ε* represents additive noise, and *s* = (*s<sub>x</sub>*, *s<sub>y</sub>*) is the shift between the model image and the target image:

$$X_{i,j} = F(i,j) + \varepsilon^{x}_{i,j} \tag{1}$$

$$Y_{i,j} = F(i+s_x,\, j+s_y) + \varepsilon^{y}_{i,j} \tag{2}$$

The object functions for SSD, LSAD, ZSAD and NCC are respectively defined as follows,

$$\mathrm{SSD}(u,v) = \sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\big[X_{i,j} - Y_{i+u,\,j+v}\big]^{2} \tag{3}$$

$$\mathrm{LSAD}(u,v) = \sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\left|X_{i,j} - \frac{\bar{X}}{\bar{Y}_{u,v}}\,Y_{i+u,\,j+v}\right| \tag{4}$$

where $\bar{X} = \frac{1}{mn}\sum_{(x,y)} X_{x,y}$ and $\bar{Y}_{u,v} = \frac{1}{mn}\sum_{(x,y)\in R_{u,v}} Y_{x,y}$ denote the mean gray levels of the *m*×*n* model block and of the candidate block $R_{u,v}$, respectively.
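As a concrete illustration, a minimal exhaustive block-matching search using SSD as the difference measure and NCC as the similarity measure could look as follows. The function names and the random test image are our own; practical implementations add pyramid search and boundary handling:

```python
import numpy as np

def ssd(x, y):
    # sum of squared differences over an equal-sized block pair
    return float(np.sum((x - y) ** 2))

def ncc(x, y):
    # normalized cross-correlation: equals 1.0 for proportional blocks
    return float(np.sum(x * y) / np.sqrt(np.sum(x ** 2) * np.sum(y ** 2)))

def match_block(model_block, target, top_left, radius):
    """Exhaustive search over integer shifts (u, v) in [-radius, radius]^2:
    returns the shift minimizing SSD and the shift maximizing NCC."""
    m, n = model_block.shape
    r0, c0 = top_left
    best_ssd = (np.inf, None)
    best_ncc = (-np.inf, None)
    for u in range(-radius, radius + 1):
        for v in range(-radius, radius + 1):
            cand = target[r0 + u:r0 + u + m, c0 + v:c0 + v + n]
            d, s = ssd(model_block, cand), ncc(model_block, cand)
            if d < best_ssd[0]:
                best_ssd = (d, (u, v))
            if s > best_ncc[0]:
                best_ncc = (s, (u, v))
    return best_ssd[1], best_ncc[1]
```

For a target that is a pure translation of the model image, both criteria select the same integer shift; they differ once illumination changes or noise is added, which is what motivates the scaled and zero-mean variants above.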

Volumetric methods usually reconstruct 3D models of external anatomical structures from 2D images. They represent the final volume using a finite set of 3D geometric primitives. Then, from an image sequence acquired around the object to be reconstructed, the images are calibrated and the 3D models of the referred object are built using different volumetric approaches. These methods work in the object's volumetric space and do not require a matching process between the images used. Thus, typically, the 3D models are built from a sequence of images acquired using a turntable device and an off-the-shelf camera (Teresa et al., in press, 2008). However, in some real applications we do not need to reconstruct a full 3D model of the object, because depth alone is enough to understand the 3D relationships of the scene.

DFS estimates depth from two images of the same scene captured by cameras at different positions and with different orientations (Wu, 1999). Because it needs to extract and match feature points in these images, its computational cost is huge. DFF, in turn, uses a mapping relation between focus and depth to estimate depth: it obtains a sequence of images at different depths, measures the degree of focus using a measurement operator (Bove, 1993; Nayar, 1992), and takes the depth at which the measurement value is maximal or minimal. Compared to DFS, DFF is simple in principle, but its estimation accuracy is highly dependent on the number of images.
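The DFF decision rule can be sketched in a few lines, under the common assumption that the energy of the image Laplacian peaks at best focus. The function names and the synthetic blur stack below are illustrative, not a specific operator from the cited works:

```python
import numpy as np

def box_blur(img, k):
    # apply a 3-tap box filter k times (separably) to simulate defocus
    ker = np.full(3, 1.0 / 3.0)
    out = img.astype(float).copy()
    for _ in range(k):
        out = np.apply_along_axis(lambda r: np.convolve(r, ker, mode='same'), 0, out)
        out = np.apply_along_axis(lambda r: np.convolve(r, ker, mode='same'), 1, out)
    return out

def focus_measure(img):
    # energy of the 4-neighbour Laplacian: large when the image is sharp
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2]
           + img[1:-1, 2:] - 4.0 * img[1:-1, 1:-1])
    return float(np.sum(lap ** 2))

def depth_index(stack):
    """DFF decision rule: the frame with the maximal focus measure
    corresponds to the depth at which the scene is in focus."""
    return int(np.argmax([focus_measure(f) for f in stack]))
```

Because the result is an index into the stack, the depth resolution is set by the number of images acquired, which is exactly the accuracy limitation noted above.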

DFD was first introduced by Pentland in 1987 (Pentland, 1987). It has been proved to be an effective depth reconstruction method that exploits the blurring degree of region images under a limited depth of field (Girod & Scherock, 1989; Pentland et al., 1994; Navar et al., 1996). Usually, a DFD algorithm captures two images with different camera parameters, measures the blurring degree at every point, and estimates depth using the point spread function. During the past years, DFD has become attractive because 1) it requires only two images; 2) it avoids matching and masking problems; 3) it is effective both in the frequency domain and in the spatial domain (Gokstorp, 1994; Subbarao & Surya, 1994). However, since all the above DFD methods need to capture two defocused images with changed camera parameters, they cannot be used in applications with high-magnification microscopes, such as micro/nano manipulation, because in these situations it is destructive to change the camera parameters. This is the main reason why DFD has not been used in micro/nano manipulation until now.
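A toy illustration of the relative-blur idea behind DFD, assuming a Gaussian approximation of the defocus point spread function and recovering the blur by grid search; all names and parameters here are our own sketch, not Pentland's formulation:

```python
import numpy as np

def gaussian_kernel(sigma):
    # normalized 1-D Gaussian, truncated at about three standard deviations
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def gblur(img, sigma):
    # separable Gaussian blur, a common approximation of the defocus PSF
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, out)

def relative_blur(sharper, blurrier, sigmas):
    """Grid-search the sigma that, applied to the sharper image, best
    reproduces the more defocused one; in DFD this relative blur is then
    mapped to depth through the lens model."""
    errs = [np.sum((gblur(sharper, s) - blurrier) ** 2) for s in sigmas]
    return sigmas[int(np.argmin(errs))]
```

Note that the two input images must be taken with different camera parameters for the relative blur to carry depth information, which is precisely why this scheme is problematic under a high-magnification microscope.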
