**2. Proposed human-robot interaction framework**

**Figure 1** presents our proposed framework for human-robot interaction, showing its components and their inter-linkages. The central element is the interaction kernel, which directs the control flow of the application and updates the model data, such as the motion commands, motion trajectories and robot status. In each update loop, the human user can use the *user interface*, which consists of a multi-modal handheld device, to provide control commands or task information for the robot to execute.
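The kernel can thus be read as a single update loop that polls the user interface, forwards commands to the robot and refreshes every view. The sketch below illustrates one possible structure under that reading; `ModelData`, `InteractionKernel` and the `poll_input`/`execute`/`render` methods are illustrative names, not the framework's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ModelData:
    """Shared state refreshed by the kernel on every cycle (illustrative)."""
    motion_commands: list = field(default_factory=list)
    trajectory: list = field(default_factory=list)
    robot_status: str = "idle"

class InteractionKernel:
    """Directs the control flow and keeps the model data consistent."""

    def __init__(self, user_interface, robot):
        self.ui = user_interface   # assumed multi-modal handheld device object
        self.robot = robot         # assumed robot controller object
        self.model = ModelData()

    def update(self):
        # 1. Poll the user interface for new commands or task information.
        command = self.ui.poll_input()
        if command is not None:
            self.model.motion_commands.append(command)
        # 2. Forward pending commands to the robot and refresh its status.
        self.model.robot_status = self.robot.execute(self.model.motion_commands)
        # 3. Push the updated model data to the displays (AR glasses,
        #    laser projection, 2D GUI) so that every view stays consistent.
        self.ui.render(self.model)
```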

The user interface uses two different types of AR technology to display robot information for the human user. The human user wears a pair of see-through AR glasses through which he or she can view additional information about the robot and the task in the real world. In addition, the robot carries an on-board laser projector to provide spatial AR: it projects its status and intentions as words or symbols onto the physical floor or wall, depending on the nature of the desired notification.
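As an illustration of how the projection target might depend on the notification, consider the hypothetical dispatch below; the notification categories and surface names are assumptions made for this sketch, not part of the framework specification.

```python
from enum import Enum, auto

class Notification(Enum):
    MOTION_INTENTION = auto()  # e.g. an arrow showing the planned path
    STATUS_MESSAGE = auto()    # e.g. the word "PAUSED" or "EXECUTING"

def projection_surface(notification: Notification) -> str:
    """Choose a physical surface for the laser projection (illustrative)."""
    # Motion intentions are most useful when drawn in the robot's
    # workspace on the floor; short status words are easier to read
    # when projected at eye level on a wall.
    if notification is Notification.MOTION_INTENTION:
        return "floor"
    return "wall"
```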

In the *task support* module of the framework, the human interacts with the robot through a dialogue graphical user interface (GUI), converging on a defined task that is acceptable to the human and executed by the robot. The role of the robot is thereby elevated from the traditional role of a dumb servant to that of a competent 'partner'. To emphasize that the human retains the higher status in the decision-making hierarchy, we refer to this partnership as a master-partner relationship: the human makes the decisions, while the robot assists by considering the intended task and its constraints in order to provide appropriate task support. The robot should be capable of offering suggestions to its human master and of learning and recognising the human's intentions [21]. It must also be equipped with a knowledge base that allows it to better define the problem. Only with these capabilities can the robot elevate its role towards that of a collaborating partner.
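A minimal sketch of this master-partner decision hierarchy, in which the robot may only suggest while the human always confirms or overrides, might look as follows; all function and method names here are illustrative assumptions.

```python
def task_support_dialogue(robot, gui, task_constraints):
    """Master-partner loop: the robot proposes, the human decides (sketch)."""
    # The robot draws on its knowledge base and the stated task
    # constraints to generate candidate task plans.
    suggestions = robot.suggest_plans(task_constraints)
    # The dialogue GUI presents the suggestions; the human either
    # accepts one of them or rejects them all.
    choice = gui.present_and_choose(suggestions)
    if choice is None:
        # The human retains full authority and may specify the
        # task manually instead of accepting a suggestion.
        return gui.request_manual_plan()
    return choice
```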

**Figure 1.** A framework for dynamic human-robot interactions.

Under the assisted mode, the human's cognitive load may be expected to be lower [22, 23] than if he or she were responsible for all aspects of the task. Where the robot is unable to assist in an appropriate manner, the human may elect to proceed without robot assistance. This is the direct mode, in which the human operator determines the path, trajectory and operation parameters of the task. In either mode, the model data, such as the generated robot motion commands, the planned trajectories and the robot status, are updated accordingly and visualized consistently through the laser projection, the augmented reality display or a 2D graphical user interface.
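The two modes, and the consistent visualization of the shared model data across displays, could be expressed roughly as below; the mode names mirror the text, while the helper objects and methods are assumptions for illustration.

```python
from enum import Enum, auto

class InteractionMode(Enum):
    ASSISTED = auto()  # the robot proposes; the human approves
    DIRECT = auto()    # the human specifies everything directly

def plan_task(mode, human, robot, task):
    """Produce a task plan according to the active interaction mode."""
    if mode is InteractionMode.ASSISTED:
        proposal = robot.plan(task)    # robot assistance lowers cognitive load
        return human.review(proposal)  # the human still has the final say
    # Direct mode: the human determines the path, trajectory and
    # operation parameters without robot assistance.
    return human.specify_plan(task)

def refresh_views(model, displays):
    """Render the same model data on every display so all views agree."""
    for display in displays:  # laser projector, AR glasses, 2D GUI
        display.render(model)
```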

As concept verification, this chapter presents the development and evaluation of a user interface module that uses laser graphics to implement spatial AR, displaying robot-provided information for the human user to visualize.
