In the *task support* module within the framework, the human interacts with the robot through a dialogue graphical user interface (GUI), working towards a defined task that is acceptable to the human and executed by the robot. The role of the robot is elevated from the traditional role of a dumb servant to that of a competent 'partner'. To highlight the need to maintain the higher status of the human in the decision-making hierarchy, we refer to this partnership as a master-partner relationship. The human makes the decisions, whilst the robot assists by considering information on the intended task, as well as the task constraints, to provide appropriate task support. The robot should be capable of providing suggestions to its human master and be able to learn and recognise the human's intentions [21]. The robot must also be imbued with a knowledge base that allows it to better define the problem. Only with these capabilities will the robot be able to elevate its role towards that of a collaborating partner.

Under the assisted mode, the human's cognitive load may be expected to be lower [22, 23] than if he were responsible for all aspects of the task. Where the robot is unable to assist the human in an appropriate manner, the human may elect to proceed without robot assistance. This is the direct mode, in which the human operator determines the path, trajectory and operating parameters of the task. The model data, such as generated robot motion commands and planned trajectories, as well as the robot status, are updated accordingly.

**Figure 1.** A framework for dynamic human-robot interactions.

**3. Hardware configuration of the user interface module**

The implementation of the user interface module is illustrated in **Figure 2**. It has been used to enable the robot to perform a waypoint navigation task in a known environment, where local features may change and obstacles may be moved. Intrinsic to the implementation are the hardware devices through which the human interacts with the robotic system.

**Figure 2.** The GUI menu (left): an operator controlling a robot with a handheld device and a wearable display device (right).

### **3.1. Wearable transparent display**


The human is provided with an Epson Moverio BT-200, a wearable device with a binocular, high-resolution, full-colour display. It incorporates a front-facing camera and motion sensors: the motion sensors capture the user's head motion, and the camera supports target tracking in the observer's field of view. The device is shown on the human operator in **Figure 2**. Wearing it allows the user to view the environment as well as any augmented data that is generated, overlaid on the real-world scene. Through the display, the robot system can provide status information and selection menus for the operator. The advantage of a wearable transparent LCD display is that it provides the human with an unimpeded view of his or her environment, so that he or she can view more information regarding the robot, as well as the task, in the real world. In addition, the robot has an on-board laser projector to provide spatial AR: the robot projects its status and intentions as words or symbols onto the physical floor or wall, depending on the nature of the desired notification.
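
As a rough illustration of how a status label might be overlaid on the see-through display, the sketch below projects a 3D point, expressed in the user's head frame, to display pixel coordinates using a simple pinhole model. The head-frame input, focal lengths and display resolution are assumptions made for illustration; they are not taken from the BT-200's specification or SDK.

```python
# Sketch: placing an AR status label over the robot in the wearable display.
# Assumes the robot's position in the user's head frame is already known
# (e.g. from the headset's motion tracking); camera parameters are
# illustrative values, not BT-200 specifics.
def project_to_display(x, y, z, fx=500.0, fy=500.0, cx=480.0, cy=270.0):
    """Project a 3D point (metres, head frame: x right, y down, z forward)
    to pixel coordinates on the display; returns None if behind the user."""
    if z <= 0.0:
        return None
    return (cx + fx * x / z, cy + fy * y / z)

# Example: robot 3 m ahead and 0.5 m to the right of the user's gaze.
print(project_to_display(0.5, 0.0, 3.0))
```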

### **3.2. Multi-modal handheld device**

A novel multimodal wireless single-handed device is depicted in **Figure 3**. It comprises five spring-loaded finger paddles, a nine-axis inertial measurement unit (IMU), a laser pointer and a near-infrared LIDAR sensor. With the spring-loaded linear potentiometers, a position-to-motion mapping assigns each finger's displacement to a specific motion command for the robot. The hand motions of the user are sensed by the IMU, and gesture recognition is applied to interpret them, allowing the user to interact with the robot through gestures.
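
A minimal sketch of such a position-to-motion mapping is given below, assuming the five paddle displacements arrive as normalised values in [0, 1] and are converted into ROS velocity commands. The paddle-to-axis assignment is a hypothetical layout, not the device's documented one; the speed limits are those quoted for the platform in Section 3.3.

```python
# Sketch: position-to-motion mapping from paddle displacements to a ROS
# velocity command. Paddle roles below are an assumption for illustration.
from geometry_msgs.msg import Twist

MAX_LIN = 0.6   # m/s, forward/lateral limit of the platform (Section 3.3)
MAX_ROT = 0.9   # rad/s, on-the-spot rotation limit

def paddles_to_twist(paddles):
    """Map five normalised paddle displacements (0 = released, 1 = pressed)
    to a Twist. Assumed roles: 0 forward, 1 reverse, 2 strafe left,
    3 strafe right, 4 rotate (centre-sprung, 0.5 = no rotation)."""
    cmd = Twist()
    cmd.linear.x = (paddles[0] - paddles[1]) * MAX_LIN
    cmd.linear.y = (paddles[2] - paddles[3]) * MAX_LIN
    cmd.angular.z = (2.0 * paddles[4] - 1.0) * MAX_ROT
    return cmd

# Example: forward paddle half-pressed, rotation paddle at rest centre.
print(paddles_to_twist([0.5, 0.0, 0.0, 0.0, 0.5]))
```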

**Figure 3.** Multimodal single-handed human input device.

A laser pointer is included on the device so that a user can point and define a particular waypoint location or a final destination. The LIDAR sensor is included to assist in user localization by the robot.
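
One plausible way to turn the pointer's pose into a waypoint is to intersect the pointing ray with the floor plane, as sketched below. The assumption that the user's position and the device's yaw and pitch are available in the world frame (e.g. from the robot's user localization and the device's IMU) is ours; the chapter does not specify the actual computation.

```python
# Sketch: deriving a floor waypoint from the handheld device's pose.
# The ray from the device is intersected with the z = 0 floor plane.
import math

def waypoint_from_pointer(user_x, user_y, device_height, yaw, pitch):
    """Return the (x, y) floor point designated by the laser pointer.

    yaw   : heading of the pointing ray, rad, world frame
    pitch : downward tilt of the ray, rad (must be > 0 to hit the floor)
    """
    if pitch <= 0.0:
        raise ValueError("ray does not intersect the floor")
    ground_range = device_height / math.tan(pitch)  # horizontal distance to hit point
    return (user_x + ground_range * math.cos(yaw),
            user_y + ground_range * math.sin(yaw))

# Example: device held 1.2 m above the floor, tilted 20 degrees down.
print(waypoint_from_pointer(0.0, 0.0, 1.2, math.radians(30), math.radians(20)))
```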

The device supports one-handed, gloved operation and was designed for use in an industrial scenario. In an industrial setting, the hands of a human operator are frequently gloved, and the use of two-handed devices is viewed as undesirable from safety considerations.

### **3.3. Robotic platform**

The human-robot partnership framework is implemented on MAVEN [24, 25]. The robot, shown in **Figure 4**, is a holonomic platform with four mecanum wheels. It hosts an on-board embedded computer that controls the drive motors and the other robotic services; the computer runs Linux, with the Robot Operating System (ROS) installed on top.

The maximum forward and lateral speeds of the robot have been limited to 0.6 m/s, while the on-the-spot rotational speed has been limited to 0.9 rad/s. The sensor and behaviour modules installed on the robot include a Hokuyo laser rangefinder, a USB camera, an MJPEG server, a localization system, a map server, and path planning and navigation modules.
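
To make the holonomic drive concrete, the sketch below shows the standard inverse kinematics for a four-mecanum-wheel base, with the body-velocity limits quoted above applied first. The wheel radius and the half-wheelbase/half-track values are illustrative assumptions, not MAVEN's actual dimensions.

```python
# Sketch: inverse kinematics for a four-mecanum-wheel holonomic base.
# Body velocities are clamped to the platform limits before conversion.
def clamp(v, limit):
    return max(-limit, min(limit, v))

def mecanum_wheel_speeds(vx, vy, wz, r=0.076, lx=0.25, ly=0.20):
    """Return (front-left, front-right, rear-left, rear-right) wheel angular
    velocities [rad/s] for body velocities vx, vy [m/s] and wz [rad/s].
    r = wheel radius, lx/ly = half wheelbase/track (illustrative values)."""
    vx, vy = clamp(vx, 0.6), clamp(vy, 0.6)   # 0.6 m/s linear limit
    wz = clamp(wz, 0.9)                       # 0.9 rad/s rotational limit
    k = lx + ly
    return ((vx - vy - k * wz) / r,
            (vx + vy + k * wz) / r,
            (vx + vy - k * wz) / r,
            (vx - vy + k * wz) / r)

# Example: pure lateral (strafing) motion at 0.3 m/s.
print(mecanum_wheel_speeds(0.0, 0.3, 0.0))
```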

The robot is provided with a laser projection-based spatial AR system, which enables the projection of line graphics and text onto a suitable surface. The projected images can be used to augment reality in the traditional manner or to indicate the robot's intention or status. In the context of a moving robotic platform, the system can project the robot's intention to move, turn, or stop, and the path planned by the robot can be projected onto the floor or road surface. During interactions, the laser graphics are used to project markers that confirm destinations, or to place virtual objects so that the human can confirm their desired position and orientation.
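
As a geometric illustration of how such a marker could be aimed, the sketch below computes the pan and tilt angles that point the projector at a given floor location in the robot frame. The mounting height and the pan/tilt convention are assumptions for illustration and do not describe the actual projector hardware.

```python
# Sketch: steering angles for projecting a floor marker with the on-board
# laser projector. Mounting height and angle conventions are illustrative.
import math

def projector_angles(x, y, mount_height=0.5):
    """Return (pan, tilt) in radians that aim the beam at floor point (x, y)
    in the robot frame, for a projector mount_height metres above the floor."""
    pan = math.atan2(y, x)                       # rotation about the vertical axis
    horizontal = math.hypot(x, y)                # horizontal distance to the point
    tilt = math.atan2(mount_height, horizontal)  # downward tilt below horizontal
    return pan, tilt

# Example: mark the robot's intended stop point 1.5 m straight ahead.
print(projector_angles(1.5, 0.0))
```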

The ability to recognise the robotic platform's intentions allows humans (and other robots) to adjust their motion to avoid conflicts. This would enhance the safety of humans in the vicinity of the robot.

**Figure 4.** Robot with a laser projection-based spatial AR system.

