The best accuracy is obtained for the running action, while the boxing action has the lowest accuracy. The overall recognition rate of our approach exceeds 95%.

Fig. 10. Confusion matrix for the KTH human action database
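
As a worked illustration of how the rates in Fig. 10 relate to the confusion matrix, the following sketch computes per-class and overall accuracy from such a matrix. The counts are placeholders chosen only to mirror the trend described above (running best, boxing lowest); they are not the chapter's measured results.

```python
import numpy as np

# Hypothetical confusion matrix for the six KTH actions
# (rows = ground truth, columns = predicted class).
# Placeholder counts, not the values of Fig. 10.
actions = ["boxing", "handclapping", "handwaving",
           "jogging", "running", "walking"]
cm = np.array([
    [90,  4,  4,  1,  0,  1],
    [ 3, 95,  2,  0,  0,  0],
    [ 2,  3, 95,  0,  0,  0],
    [ 0,  0,  0, 96,  3,  1],
    [ 0,  0,  0,  1, 99,  0],
    [ 0,  0,  0,  2,  1, 97],
])

# Per-class accuracy: diagonal entry over the row total.
per_class = cm.diagonal() / cm.sum(axis=1)
for action, acc in zip(actions, per_class):
    print(f"{action:<13} {acc:.2%}")

# Overall recognition rate: correct samples over all samples.
print(f"{'overall':<13} {cm.trace() / cm.sum():.2%}")
```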

The developed approach leads to interesting results compared to other algorithms for human action recognition. All these methods use STIPs to characterize movements, without tracking algorithms or background segmentation. Our approach is also comparable to methods based on tracking or segmentation. In Table 4, we classify the different approaches according to their accuracy.

| Method | Year | Accuracy |
|--------|------|----------|
| Our method | 2011 | 95.17% |
| Xunshi et al. | 2010 | 90.30% |
| Ikizler et al. | 2009 | 89.40% |
| Niebles et al. | 2008 | 83.33% |
| Dollár et al. | 2005 | 81.17% |

Table 4. Classification of different approaches according to their accuracy

**6. Conclusion**

In this chapter we presented an approach to human action recognition using spatio-temporal interest points (STIPs). The STIPs were detected by applying the Laptev detector. Our classification approach is based on a parameter vector deduced from three studies: the first concerns the number of STIPs over 100 frames, the second studies the evolution of this number in each frame of the sequence, and the third assigns the STIPs to spatio-temporal boxes associated with different parts of the body. For classification we used the k-means classifier. The developed approach has led to good performance compared to the well-known methods for human action recognition.
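
The following is a minimal sketch, under our own assumptions, of how such a parameter vector could be assembled and classified with k-means. The three feature groups (STIP count per 100 frames, per-frame evolution statistics, and the distribution over spatio-temporal boxes) are simplified stand-ins for the chapter's descriptors, and the synthetic data, helper names, and majority-vote cluster labelling are illustrative rather than the actual implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def parameter_vector(frame_idx, box_idx, n_frames, n_boxes=4):
    """Build a parameter vector for one sequence from its STIPs.

    `frame_idx` and `box_idx` give, for every detected interest point,
    the frame it occurs in and the spatio-temporal box (body part) it
    falls into. The three feature groups are simplified stand-ins for
    the chapter's three studies.
    """
    per_frame = np.bincount(frame_idx, minlength=n_frames)
    per_box = np.bincount(box_idx, minlength=n_boxes)
    n_points = max(len(frame_idx), 1)
    return np.concatenate([
        [100.0 * len(frame_idx) / n_frames],   # STIPs per 100 frames
        [per_frame.mean(), per_frame.std()],   # per-frame evolution
        per_box / n_points,                    # share of STIPs per box
    ])

# Synthetic stand-in data: 10 sequences for each of 6 action classes,
# with the STIP count loosely depending on the action.
n_frames = 100
y_train = np.repeat(np.arange(6), 10)
X = np.stack([
    parameter_vector(rng.integers(0, n_frames, 50 + 30 * label),
                     rng.integers(0, 4, 50 + 30 * label),
                     n_frames)
    for label in y_train
])

# Fit one cluster per action, then name each cluster by majority vote
# over the training labels; a new sequence would get the label of its
# nearest centroid via kmeans.predict().
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)
cluster_to_action = {
    c: np.bincount(y_train[kmeans.labels_ == c]).argmax()
    for c in range(6)
}
predicted = np.array([cluster_to_action[c] for c in kmeans.labels_])
print("training accuracy:", (predicted == y_train).mean())
```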

As we have so far only considered k-means as the classification algorithm, we are currently implementing the SVM and pLDA algorithms and plan to carry out a comparative study.
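
The following sketch shows how the planned SVM comparison might look on the same parameter vectors; it reuses the hypothetical `X` and `y_train` arrays from the previous example, and the RBF kernel and feature scaling are our assumptions, not the chapter's settings.

```python
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Supervised alternative to k-means on the same parameter vectors;
# X and y_train come from the previous sketch, and the kernel choice
# is illustrative.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(svm, X, y_train, cv=5)
print(f"SVM cross-validated accuracy: {scores.mean():.2%}")
```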

**7. References**

Niebles, J., Wang, H., & Fei-Fei, L. (2008). Unsupervised learning of human action categories using spatial-temporal words. *International Journal of Computer Vision*, Vol.79, No.3 (September 2008), pp. 299–318, ISSN 0920-5691.

Oikonomopoulos, A., Patras, I., & Pantic, M. (2006). Spatiotemporal salient points for visual recognition of human actions. *IEEE Trans. Sys. Man. and Cybernetics, Part B*, Vol.36, No.3 (June 2006), pp. 710–719, ISSN 1083-4419.

Ramanan, D., & Forsyth, D. A. (2004). Automatic annotation of everyday movements. In: *Advances in Neural Information Processing Systems*, Thrun, S.; Saul, L.; & Schölkopf, B. (Eds.), Vol.16, ISBN 0-262-20152-6, Cambridge: MIT Press.

Schuldt, C., Laptev, I., & Caputo, B. (2004). Recognizing human actions: a local SVM approach. *Proceedings of the 17th International Conference on Pattern Recognition*, pp. 32–36, ISBN 0-7695-2128-2, Cambridge, England, UK, August 23–26, 2004.

Simac-Lejeune, A., Rombaut, M., & Lambert, P. (2010). Points d'intérêt spatio-temporels pour la détection de mouvements dans les vidéos [Spatio-temporal interest points for motion detection in videos]. *Proceedings of MajecSTIC 2010*, Bordeaux, France, October 13–15, 2010.

Xunshi, Y., & Yupin, L. (2010). Making full use of spatial-temporal interest points: an AdaBoost approach for action recognition. *Proceedings of the IEEE 17th International Conference on Image Processing*, pp. 4677–4680, ISBN 978-1-4244-7992-4, Hong Kong, China, September 26–29, 2010.

Zhong, H., Shi, J., & Visontai, M. (2004). Detecting unusual activity in video. *Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition*, pp. 819–826, ISBN 0-7695-2158-4, Washington DC, USA, June 27–July 2, 2004.
