**1. Introduction**

Vocalized emotional expressions such as laughter and surprise (usually accompanied by verbal interjectional utterances) often occur in daily dialogue interactions and serve important social functions in human-human communication. Laughter and surprise utterances are not simply reactions to something funny or emotional; they can also express an attitude, such as friendliness or interest [1, 2].

Therefore, it is important to account for such vocalized emotional/attitudinal expressions in robot-mediated communication as well. Since android robots have a highly humanlike appearance, natural communication with humans can be achieved through several types of nonverbal information, such as facial expressions and head/body gestures. There are numerous studies on facial expression generation in robots [3–11]. Most of these deal with symbolic (static) facial expressions of the six traditional emotions (happiness, sadness, anger, disgust, fear, and surprise). However, in real daily interactions, humans express many types of emotions and attitudes through subtle changes in facial expression and head/body motion.

When expressing an emotion, humans not only use facial expressions but also synchronize other modalities, such as head and body movements as well as vocal expressions. Because androids have a highly humanlike appearance, a missing modality or poor synchronization among modalities can produce an unnatural facial expression or motion and thus cause a strongly negative impression (the "uncanny valley"). Therefore, it is important to clarify methodologies for generating natural-looking motions through appropriate timing control.

The author's research group has been working on improving human-robot communication by implementing humanlike motions in several types of humanoid robots. So far, several methods for automatically generating the lip and head motions of a humanoid robot in synchrony with the speech signal have been proposed and evaluated [12–15]. Throughout the evaluation experiments, it has been observed that more natural (humanlike) behaviors are expected from a robot as its appearance approaches that of a human, as in android robots. Furthermore, unnaturalness arises when there is a mismatch between voice and motion, especially during short-term emotional expressions such as laughter and surprise. To achieve smooth human-robot interaction, it is essential that the robot express natural (humanlike) behaviors.

This chapter focuses on motion generation for two vocalized emotional expressions: laughter and surprise. These are usually shorter in duration than other emotion expressions such as happiness, sadness, anger, and fear, so suitable timing control between the voice and the movements of the facial parts, head, and body is important. The control of different modalities is investigated for achieving natural motion generation during laughter and surprise events of humanoid robots (i.e., when the robot produces a laughter or a vocalized surprise reaction).

In Section 2, related work on motion analysis and generation during emotion expression is presented. In Section 3, the motion generation methods for laughter and surprise expressions are described, along with the motion control methods of an android robot. The motion generation methods are based on analysis results of human behaviors during dialogue interactions [16, 17]. Sections 4 and 5 present evaluation results on the effectiveness of controlling different modalities of the face, head, and upper body (eyebrow raising, eyelid widening/narrowing, lip corner/cheek raising, eye blinking, head motion, and torso motion) and different motion control levels for laughter and surprise expressions. The effects of each modality are investigated through subjective experiments using an android robot as a test bed. Section 6 concludes the chapter and presents topics for future work. The contents of this chapter are partially included in the author's previously published studies [18, 19]; readers are invited to refer to those studies for more details on the motion analysis results.
