**8. Acknowledgements**

The research was funded by the Austrian Science Fund (FWF): TRP140-N23-2010. The authors would also like to thank Dr. Mark Blackburn† (Fitzwilliam Museum, UK) for providing images for 2D image analysis and Mario Schlapke (Thüringisches Landesamt für Denkmalpflege und Archäologie, Weimar, Germany) for providing the coins for 3D scanning.


**8**

**Non-Rigid Objects Recognition: Automatic Human Action Recognition in Video Sequences**

Mehrez Abdellaoui1, Ali Douik1 and Kamel Besbes2

*1National Engineering School of Monastir,*

*2Faculty of Sciences of Monastir,*

*Tunisia*

**1. Introduction**

Non-rigid object recognition is an important problem in video analysis and understanding. It is nevertheless a challenging task because of the intrinsic properties of non-rigid objects, and it is further complicated by camera motion and background variation. Human body recognition in video sequences is the foremost application of non-rigid object recognition, owing to the wide range of actions and poses the human body can assume. These difficulties prohibit practical attempts to build a robust global model for each action class. Human body recognition is highly relevant to a variety of applications, such as detecting relevant activities in surveillance video and summarizing and indexing video sequences. It relies, however, on interpreting body movements and classifying them into different events.

A considerable amount of previous work has addressed human action categorization and motion analysis. One line of work is based on computing correlations between volumes of video data (Efros et al., 2003). Another popular approach is to first track body parts and then use the obtained motion trajectories to perform action recognition (Ramanan & Forsyth, 2004); the robustness of this approach depends strongly on the tracking system. Alternatively, researchers have analysed human actions by treating video sequences as space-time intensity volumes (Bobick & Davis, 2001). Unsupervised methods for motion analysis, such as hierarchical dynamic Bayesian network models, have also been explored (Hoey, 2001; Zhong et al., 2004). Yet another approach uses a video representation based on spatiotemporal interest points (STIPs). Although a fairly large variety of methods exists for extracting interest points (IPs) from static images, such as the Harris corner detector (Harris & Stephens, 1988), the scale-invariant feature transform (Lowe, 1999) and salient regions (Kadir & Brady, 2003), less work has been done on STIP detection in videos. Laptev (2005) presented a STIP detector based on the idea of the Harris IP operator: it detects local structures in space-time where the image values have significant local variations in both the spatial and temporal dimensions. IPs extracted with such methods have been used as features for human action classification. These points are particularly interesting because they concentrate the information contained in an image into a few specific points. Integrating the time component acts as a filter on the IPs, keeping only those that also exhibit a temporal discontinuity.

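The space-time extension of the Harris detector mentioned above can be sketched in a few lines: the 2×2 second-moment matrix becomes a 3×3 matrix built from spatial and temporal gradients, and a Harris-style cornerness score singles out points whose intensity varies in both space and time. The sketch below is a minimal NumPy illustration, not Laptev's implementation: the function names, the box-filter smoothing (standing in for the spatio-temporal Gaussian window) and the constant `k` are assumptions made for brevity.

```python
import numpy as np

def _box_smooth(a, w=3):
    # Separable moving average, used here in place of the
    # spatio-temporal Gaussian weighting (a simplification).
    kernel = np.ones(w) / w
    for axis in range(a.ndim):
        a = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), axis, a)
    return a

def spatiotemporal_harris(volume, k=0.005):
    """Space-time cornerness map for a grayscale video volume.

    volume : ndarray of shape (T, H, W).
    Returns an array of the same shape; large-magnitude responses
    mark points whose intensity varies significantly in both the
    spatial and the temporal dimensions.
    """
    # Gradients along time (t), vertical (y) and horizontal (x).
    Lt, Ly, Lx = np.gradient(volume.astype(float))

    # Entries of the symmetric 3x3 second-moment matrix, averaged
    # over a local space-time neighbourhood.
    Mxx = _box_smooth(Lx * Lx); Myy = _box_smooth(Ly * Ly)
    Mtt = _box_smooth(Lt * Lt); Mxy = _box_smooth(Lx * Ly)
    Mxt = _box_smooth(Lx * Lt); Myt = _box_smooth(Ly * Lt)

    # Harris-style cornerness H = det(M) - k * trace(M)^3,
    # evaluated pointwise over the volume.
    det = (Mxx * (Myy * Mtt - Myt * Myt)
           - Mxy * (Mxy * Mtt - Myt * Mxt)
           + Mxt * (Mxy * Myt - Myy * Mxt))
    trace = Mxx + Myy + Mtt
    return det - k * trace ** 3
```

Thresholding the magnitude of the response and keeping local maxima would yield candidate STIPs; note that static background regions have zero gradients and therefore zero response, which is exactly the temporal filtering effect described in the text.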
