**6. Implementation of human-robot interaction**

Real-time, gesture-based human-robot interaction is implemented as an application of this system. The approach has been deployed on a humanoid robot named "Robovie". Since the same gesture can mean different tasks for different persons, the system maintains each gesture together with person-to-task knowledge. The robot and the gesture-recognition PC are connected to the SPAK knowledge server. The image-analysis-and-recognition PC sends the person's identity and pose names to SPAK for decision making and robot activation. According to the gesture and the user's identity, the knowledge module generates executable code for robot actions, and the robot then carries out the corresponding speech and body-action commands. This method has been implemented for the following scenario:
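The person-to-task knowledge described above can be pictured as a per-user lookup from gesture name to robot command. The sketch below is illustrative only, assuming a plain dictionary-based mapping; the names (`GESTURE_MAP`, `dispatch`) and the command tuples are hypothetical and not part of the actual SPAK knowledge frames.

```python
# Minimal sketch of per-user gesture-to-task knowledge (hypothetical names;
# the real system stores this as SPAK knowledge frames, not Python dicts).

# Each user has an individual mapping from gesture name to a robot command,
# so the same gesture can trigger different behaviours for different users.
GESTURE_MAP = {
    "Person_1": {
        "OK":      ("speech", "Oh, good! Do you want to play now?"),
        "TwoHand": ("motion", "RaiseTwoArms"),
        "FistUp":  ("speech", "Bye-bye"),
    },
    "Person_2": {
        "TwoHand":   ("speech", "Bye-bye"),
        "LeftHand":  ("motion", "RaiseLeftArm"),
        "RightHand": ("motion", "RaiseRightArm"),
    },
}

def dispatch(person: str, gesture: str):
    """Return the (command_type, payload) for a recognized gesture,
    or None when the gesture has no meaning for this user."""
    return GESTURE_MAP.get(person, {}).get(gesture)
```

Under this sketch, `dispatch("Person_1", "TwoHand")` yields a motion command while `dispatch("Person_2", "TwoHand")` yields speech, illustrating how one gesture can carry different meanings for different users.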

User: "Person\_1" comes in front of Robovie's eye camera, and the robot recognizes the user as "Person\_1".

Robot: "Hi Person\_1, how are you?" (speech)

Person\_1: uses the gesture "OK"

Robot: "Oh, good! Do you want to play now?" (speech)

Person\_1: uses the gesture "YES"

Robot: "Oh, thanks!" (speech)

Person\_1: uses the gesture "TwoHand"

Robot: imitates the user's gesture ("Raise Two Arms"), as shown in Figure 19.

Person\_1: uses the gesture "FistUp" (to stop the interaction)

Robot: "Bye-bye" (speech)

User: "Person\_2" comes in front of Robovie's eye camera; the robot detects the face as unknown.

Robot: "Hi, what is your name?" (speech)

Person\_2: types his name, "Person\_2"

Robot: "Oh, good! Do you want to play now?" (speech)

Person\_2: uses the gesture "OK"

Robot: "Thanks!" (speech)

Person\_2: uses the gesture "LeftHand"

Robot: imitates the user's gesture ("Raise Left Arm")

Person\_2: uses the gesture "RightHand"

Robot: imitates the user's gesture ("Raise Right Arm")

Person\_2: uses the gesture "Three"

Robot: "This is three" (speech)

Person\_2: uses the gesture "TwoHand"

Robot: "Bye-bye" (speech)

The above scenario shows that the same gesture can represent different meanings, and that several gestures can denote the same meaning, for different persons. A user can design new actions for Robovie according to his or her wishes, and can design the corresponding knowledge frames in SPAK to implement those actions.

Fig. 19. Example human-robot (Robovie) interaction

To overcome this problem, in future we should maintain person-specific subspaces (an individual PCA for each person over all hand poses) for pose classification and learning. In this chapter we have also described how the system can adapt to new gestures and new robot behaviours using multiple occurrences of the same gesture with user interaction. Our future aim is to make the system more robust and dynamically adaptable to new users and new gestures, for interaction with different robots such as Aibo, Robovie and Scout. The ultimate goal of this research is to establish a human-robot symbiotic society in which robots share their resources and work cooperatively with human beings.

**8. Acknowledgment**

I would like to express my deep sense of gratitude and thanks to Dr. Haruki Ueno, Professor, Department of Informatics, National Institute of Informatics, Tokyo, Japan, for his sagacious guidance, encouragement and every possible help throughout this research work. I am grateful to Dr. Y. Shirai, Professor, Department of Human and Computer Intelligence, School of Information Science and Engineering, Ritsumeikan University, for his ingenious inspiration, suggestions and care during the whole research period. I would also like to express my sincere thanks to Professor H. Gotoda, Department of Informatics, National Institute of Informatics, Tokyo, Japan, for his valuable suggestions throughout the research. I must also thank Dr. T. Zhang and Dr. V. Ampornaramveth for their assistance and suggestions during this research work.

**9. References**

[Ampornaramveth, 2004] V. Ampornaramveth, P. Kiatisevi, H. Ueno, "SPAK: Software Platform for Agents and Knowledge Systems in Symbiotic Robots", IEICE Transactions on Information and Systems, Vol. E86-D, No. 3, pp. 1-10, 2004.

[Ampornaramveth, 2001] V. Ampornaramveth, H. Ueno, "Software Platform for Symbiotic Operations of Human and Networked Robots", NII Journal, Vol. 3, No. 1, pp. 73-81, 2001.

[Aryananda, 2002] L. Aryananda, "Recognizing and Remembering Individuals: Online and Unsupervised Face Recognition for Humanoid Robot", in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2002), Vol. 2, pp. 1202-1207, 2002.

[ASL, 2004] "American Sign Language Browser", http://commtechlab.msu.edu/sites/aslweb/browser.htm, visited on April 2004.

[Augusteijn, 1993] M. F. Augusteijn and T. L. Skujca, "Identification of Human Faces Through Texture-Based Feature Recognition and Neural Network Technology", in Proceedings of the IEEE Conference on Neural Networks, pp. 392-398, 1993.

[Azoz, 1998] Y. Azoz, L. Devi and R. Sharma, "Reliable Tracking of Human Arm Dynamics by Multiple Cue Integration and Constraint Fusion", in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'98), pp. 905-910, 1998.
