**6. Conclusion and final remarks**

Methods for generating motion synchronized with laughter speech and vocalized surprise expressions were described, based on analyses of human facial, head, and body motion during dialogue interactions.

The effectiveness of controlling different modalities of the face, head, and upper body (eyebrow raising, eyelid widening/narrowing, lip corner/cheek raising, eye blinking, head pitch, and torso pitch) at different motion control levels was evaluated on an android robot, through subjective experiments comparing motions generated with different modalities and control levels.

Evaluation results for laughter motion generation indicated that the motion is perceived as unnatural if only the facial expression (lip corner raising and eyelid narrowing) is controlled, without head and body motion control. Naturalness scores increased when head pitch, eye blinking (inserted at the instant the facial expression returns to neutral), an idle smile face (during non-laughter intervals), and upper-body motion were also controlled, and the best naturalness scores were achieved when all modalities were controlled.
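The modality combination described above can be pictured as a simple event schedule keyed to the laughter speech intervals. The sketch below is a hypothetical illustration of such a scheduler; the interval times, amplitudes, and event names are assumptions for illustration, not the chapter's actual controller.

```python
# Hypothetical sketch of multimodal laughter motion scheduling.
# Event names and timing conventions are illustrative assumptions.

def schedule_laughter_motion(laugh_intervals, utterance_end):
    """Build a time-sorted list of (time, modality, command) events.

    laugh_intervals: list of (start, end) times of laughter speech.
    utterance_end:   total duration of the utterance.
    """
    events = []
    prev_end = 0.0
    for start, end in laugh_intervals:
        # Idle slight-smile face during the preceding non-laughter interval.
        if start > prev_end:
            events.append((prev_end, "face", "idle_smile"))
        # Facial expression: lip corner raising + eyelid narrowing.
        events.append((start, "face", "laugh_expression"))
        # Head and torso pitch motion accompany the laughter interval.
        events.append((start, "head", "pitch_down"))
        events.append((start, "torso", "pitch_down"))
        # Eye blink inserted at the instant the face returns to neutral,
        # masking the sudden change in facial expression.
        events.append((end, "eyes", "blink"))
        events.append((end, "face", "neutral"))
        events.append((end, "head", "pitch_neutral"))
        events.append((end, "torso", "pitch_neutral"))
        prev_end = end
    # Idle smile for the remaining non-laughter interval, if any.
    if prev_end < utterance_end:
        events.append((prev_end, "face", "idle_smile"))
    return sorted(events, key=lambda e: e[0])
```

For one laughter interval from 1.0 s to 2.0 s in a 3.0 s utterance, the schedule begins with an idle smile, raises the laughing expression together with head and torso pitch at 1.0 s, and closes the interval with a blink and a return to neutral.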

Evaluation results for surprise motion generation indicated that (1) eyebrow/eyelid motion control is effective in changing the perceived degree of the surprise expression, (2) upper-body motion control is effective for increasing both the degree of surprise expression and naturalness, (3) head motion is more effective for increasing naturalness than the degree of surprise, (4) the degrees of surprise expression perceived for the different motion types are biased by the surprise degree conveyed by the voice-only modality, and (5) utterances with high surprise degrees may be interpreted as intentional (rather than emotional or spontaneous) if they are not accompanied by upper-body motion.

The present study showed that natural motion for laughter and surprise expressions can be generated with a limited number of DOFs (lip corners, eyelids, eyebrows, head pitch, and torso pitch). Although the android robot ERICA was used as the test bed for evaluation, the described motion generation approach can be generalized to any robot with equivalent DOFs.
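One way to read the generalization claim is that the generation method only targets a small set of abstract DOFs, which each robot then maps onto its own actuators. The sketch below illustrates such a mapping layer; the joint names and command ranges are invented for illustration and do not correspond to ERICA's actual actuator interface.

```python
# Minimal sketch of mapping the five abstract DOFs used in the study onto
# another robot's actuators. Joint IDs and command ranges are hypothetical.

ABSTRACT_DOFS = ("lip_corner", "eyelids", "eyebrows", "head_pitch", "torso_pitch")

class DofMapper:
    """Translate normalized DOF targets (0..1) into per-robot joint commands."""

    def __init__(self, joint_map):
        # joint_map: abstract DOF name -> (joint_id, min_cmd, max_cmd)
        self.joint_map = joint_map

    def to_commands(self, targets):
        commands = {}
        for dof, value in targets.items():
            joint_id, lo, hi = self.joint_map[dof]
            # Clamp to [0, 1], then interpolate into the joint's command range.
            commands[joint_id] = lo + (hi - lo) * max(0.0, min(1.0, value))
        return commands

# Example: a hypothetical robot with equivalent DOFs but different ranges.
mapper = DofMapper({
    "lip_corner": ("servo_12", 0, 255),
    "head_pitch": ("servo_3", -30, 30),
})
```

With this separation, the same generated motion trajectories (normalized DOF targets over time) can drive any platform for which such a mapping exists.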

Remaining topics for future work include the automatic detection of laughing speech intervals and surprise utterance intervals from acoustic features, in order to automate the motion generation process from the input speech signal. Prediction of surprise expression degrees from acoustic features and explicit modeling of laughter intensity also remain to be addressed for motion generation automation. Further topics for improving motion naturalness include control strategies for the head tilt and shake axes, investigation of eye-blink insertion to alleviate the unnaturalness caused by sudden changes in other facial expressions, and detection of situations calling for slight smile face control.

*Motion Generation during Vocalized Emotional Expressions and Evaluation in Android Robots. DOI: http://dx.doi.org/10.5772/intechopen.88457*
