**7. Conclusion**

This chapter describes a method for adapting users, gestures, and robot behaviours in human-robot interaction by integrating computer vision with a knowledge-based software platform. In this method, the user defines frames for users, poses, gestures, robots, and robot behaviours. The chapter presents a multi-cluster interactive learning approach for adapting to new users and hand poses. However, if a large number of users each employ a large number of hand poses, the system cannot run in real time. To overcome this problem, future work should maintain person-specific subspaces (an individual PCA over all hand poses for each person) for pose classification and learning.
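The chapter does not give an implementation of person-specific subspaces, but the idea can be sketched as follows, assuming flattened grey-level hand-pose images and classification by reconstruction error; the function names (`fit_subspace`, `classify_person`) and the component count are illustrative, not the chapter's actual code:

```python
import numpy as np

def fit_subspace(pose_images, n_components=5):
    """Fit a PCA subspace (mean + top principal axes) to one person's
    flattened hand-pose images (rows = samples)."""
    X = np.asarray(pose_images, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centred data yields the principal axes in Vt
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def reconstruction_error(x, subspace):
    """Distance from x to its projection onto a person's subspace."""
    mean, basis = subspace
    centred = x - mean
    projected = basis.T @ (basis @ centred)
    return np.linalg.norm(centred - projected)

def classify_person(x, subspaces):
    """Return the person whose subspace reconstructs x best."""
    return min(subspaces, key=lambda p: reconstruction_error(x, subspaces[p]))
```

Because each person's subspace is fitted independently, a new user only requires fitting one additional PCA rather than retraining a global model, which is what makes the approach attractive for real-time adaptation.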

In this chapter we have also described how the system can adapt to new gestures and new robot behaviours using multiple occurrences of the same gesture together with user interaction. The future aim is to make the system more robust and dynamically adaptable to new users and new gestures for interaction with different robots such as Aibo, Robovie, and Scout. The ultimate goal of this research is to establish a human-robot symbiotic society in which robots share their resources and work cooperatively with human beings.
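The adaptation loop above — observe an unknown gesture several times, then ask the user to name it and bind it to a robot behaviour — can be sketched minimally as follows. The threshold of three occurrences, the class name, and the `ask_user` callback are all assumptions made for illustration, not the chapter's actual design:

```python
from dataclasses import dataclass, field

UNKNOWN_THRESHOLD = 3  # assumed: occurrences before the system asks the user

@dataclass
class GestureAdapter:
    """Collect repeated unknown observations, then register the
    gesture with a user-supplied label and robot behaviour."""
    known: dict = field(default_factory=dict)    # label -> behaviour
    pending: list = field(default_factory=list)  # unlabelled observations

    def observe(self, label_or_none, observation, ask_user):
        """Return the behaviour for a recognised gesture, or None
        while an unknown gesture is still being collected."""
        if label_or_none in self.known:
            return self.known[label_or_none]
        self.pending.append(observation)
        if len(self.pending) >= UNKNOWN_THRESHOLD:
            # interactive step: the user names the gesture and
            # chooses the behaviour it should trigger
            label, behaviour = ask_user(self.pending)
            self.known[label] = behaviour
            self.pending.clear()
            return behaviour
        return None
```

Requiring several occurrences before prompting keeps spurious one-off detections from polluting the knowledge base, at the cost of a short delay before a genuinely new gesture becomes usable.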
