**5.2 Experiment in a real environment**

A Construction Method for Automatic Human Tracking System with Mobile Agent Technology 35

The experiment on the real system aimed to confirm whether targets can be tracked continuously by the proposed tracking method. To reduce the influence of image-processing performance, the image processing uses simple color-based vision. The experiment uses a miniature environment of 2*m* by 2*m*, shown in Fig. 8 and Fig. 9. There are three targets, and each target has its own moving route (red, blue, and green), as shown in Fig. 8. The environment consists of seven servers, each with a USB camera. Toy trains are used as targets instead of tracked people. Each train has a sensor that recognizes a black line and runs along that line; the black lines, drawn by hand, serve as the routes the targets walk. Each toy is covered with colored paper to keep a certain accuracy of image processing, as shown in Fig. 9. The moving speed of a target is 6.8*cm/second*. This environment simulates a floor of 60*m* by 60*m* on a scale of 1/30; the moving speed of a target is therefore equivalent to 2.0*m/second*. Considering the moving routes and the moving speed of the targets, the search cycle is set to 5 seconds.
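As a quick check of the scale arithmetic above (the 1/30 scale, the 6.8*cm/second* toy speed, and the 5-second search cycle are taken from the text; the script itself is only an illustration):

```python
# Scale-model conversion for the 1/30 miniature environment.
SCALE = 30                # real length / miniature length
miniature_speed = 0.068   # m/s: the 6.8 cm/s speed of the toy train

# Walking speed this corresponds to on the simulated 60 m x 60 m floor.
real_speed = miniature_speed * SCALE
print(round(real_speed, 2))  # -> 2.04, i.e. about 2.0 m/second

# Real-equivalent distance a target moves during one 5-second search cycle.
search_cycle_s = 5.0
print(round(real_speed * search_cycle_s, 1))  # -> 10.2 m
```

The bound of roughly ten meters of real-equivalent movement per cycle suggests why a 5-second search cycle is short enough to hand a target over between neighboring cameras.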


Fig. 8. Moving routes of the targets in the experiment environment.

Fig. 9. Real experiment environment.

34 Recent Developments in Video Surveillance


Fig. 10. Detection of the targets by the cameras in the experiment environment.
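The "simple processing by color vision" used in this experiment can be sketched as hue thresholding in HSV space. The hue ranges, the shadow threshold, and the helper name below are illustrative assumptions; the chapter does not give the actual values used.

```python
import colorsys

# Illustrative hue ranges (H in [0, 1)) for the three colored paper covers.
# These thresholds are assumptions, not the values from the experiment.
HUE_RANGES = {
    "red":   [(0.95, 1.0), (0.0, 0.05)],  # red hue wraps around 0
    "green": [(0.25, 0.45)],
    "blue":  [(0.55, 0.70)],
}

MIN_SV = 0.2  # very dark or unsaturated pixels (shadows) count as "no target"

def classify_pixel(r, g, b):
    """Return the target color name for an RGB pixel in [0, 1], or None."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    if s < MIN_SV or v < MIN_SV:      # shadowed pixels are judged as black
        return None
    for name, ranges in HUE_RANGES.items():
        if any(lo <= h < hi for lo, hi in ranges):
            return name
    return None

print(classify_pixel(0.9, 0.1, 0.1))  # -> red
print(classify_pixel(0.1, 0.1, 0.1))  # -> None (too dark: shadow/black)
```

The shadow threshold mirrors the failure mode described below: under low illuminance, a covered toy's color falls below the saturation/value cutoff and is judged as black, so no target is detected.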

Fig. 10 shows the results of the experiment. The graphs in Fig. 10 are arranged in order from camera A to camera G. The horizontal axis is time, and the vertical axis is the H value of the extracted HSV color. A solid-line rectangle encloses the part where the red target was detected, a short-dashed-line rectangle encloses the part where the blue target was detected, and a dotted-line rectangle encloses the part where the green target was detected. Because the movement of each target is periodic, the plot forms a regular pattern whenever the target is captured correctly. In practice, however, the illuminance differed depending on where each camera was installed: colors were sometimes judged to be black because of shadows, and a target could fail to be captured. As a result, the number of agents did not increase and decrease in a regular pattern. Even so, the neighbor node determination algorithm was able to calculate the neighbor camera nodes. However, when a target is captured by a video camera, the neighbor nodes are recomputed by the algorithm, and it was confirmed that an agent might not be deployed appropriately for a few dozen milliseconds; this computation time needs to be improved. In the graphs of Fig. 10, the plots show the times at which each target was captured. A target is sometimes lost temporarily by the camera capturing it, but it was confirmed that the system keeps tracking targets continuously because other cameras capture the target.

… studies and to improve the image processing program by using the PCA-SIFT (Ke et al. 2004) or SURF (Bay et al. 2008) algorithm.

**7. Acknowledgment**

The authors would like to thank Tadaaki Shimizu, Yusuke Hamada, Naoki Ishibashi, Shinya Iwasaki, Hirotoshi Okumura, Masato Hamamura, and Shingo Iiyama of Tottori University.

**8. References**

Terashita, K.; Ukita, N. & Kidode, M. (2009). Efficiency improvement of probabilistic-topological calibration of widely distributed active cameras, *IPSJ SIG Technical Report*, vol. 2009-CVIM-166, pp. 241–248.

Kawaguchi, Y.; Shimada, A.; Arita, D. & Taniguchi, R. (2008). Object trajectory acquisition with an active camera for wide area scene surveillance, *IPSJ SIG Technical Report*, vol. 2008-CVIM-163, pp. 1306–1311.

Yin, H. & Hussain, I. (2008). Independent component analysis and nongaussianity for blind image deconvolution and deblurring, *Integrated Computer-Aided Engineering*, vol. 15, no. 3, pp. 219–228.

Tanizawa, Y.; Satoh, I. & Anzai, Y. (2002). A User Tracking Mobile Agent Framework "FollowingSpace", *Information Processing Society of Japan*, vol. 43, no. 12, pp. 3775–3784.

Tanaka, T.; Ogawa, T.; Numata, S.; Itao, T.; Tsukamoto, S. & Nishio, S. (2004). Design and Implementation of a Human Tracking System Using Mobile Agents in Camera and Sensor Networks, *IPSJ Workshop of Groupware and Network Services 2004*, pp. 15–20.

Nakazawa, A.; Hiura, S.; Kato, H. & Inokuchi, S. (2001). Tracking Multiple Persons Using Distributed Vision Systems, *Information Processing Society of Japan*, vol. 42, no. 11, pp. 2699–2710.

Wren, C. R.; Erdem, U. M. & Azarbayejani, A. J. (2005). Automatic pan-tilt-zoom calibration in the presence of hybrid sensor networks, *Proceedings of the Third ACM International Workshop on Video Surveillance & Sensor Networks (VSSN'05)*, pp. 113–120, Nov. 2005.

Takemura, N. & Miura, J. (2007). View planning of multiple active cameras for wide area surveillance, *Proc. IEEE International Conference on Robotics and Automation (ICRA'07)*, pp. 3173–3179, Apr. 2007.

Sankaranarayanan, A. C.; Veeraraghavan, A. & Chellappa, R. (2008). Object Detection, Tracking and Recognition for Multiple Smart Cameras, *Proceedings of the IEEE*, vol. 96, no. 10, pp. 1606–1624, Oct. 2008.

Sommerlade, E. & Reid, I. (2010). Probabilistic Surveillance with Multiple Active Cameras, *2010 IEEE International Conference on Robotics and Automation (ICRA)*, pp. 440–445, 3–7 May 2010.

Iosifidis, A.; Mouroutsos, S. G. & Gasteratos, A. (2011). A Hybrid Static/Active Video Surveillance System, *International Journal of Optomechatronics*, ISSN 1559-9620, vol. 5, no. 1, pp. 80–95, January 2011.

Porikli, F. & Divakaran, A. (2003). Multi-Camera Calibration, Object Tracking and Query Generation, *Proc. International Conference on Multimedia and Expo (ICME'03)*, pp. 653–656, July 2003.
